id | author | task_category | tags | created_time | last_modified | downloads | likes | README | matched_task | matched_bigbio_names | is_bionlp
---|---|---|---|---|---|---|---|---|---|---|---
Alibaba-NLP/gte-base-en-v1.5 | Alibaba-NLP | sentence-similarity | [
"transformers",
"onnx",
"safetensors",
"new",
"feature-extraction",
"sentence-transformers",
"gte",
"mteb",
"transformers.js",
"sentence-similarity",
"custom_code",
"en",
"arxiv:2407.19669",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,713 | 1,731 | 2,607,332 | 63 | ---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- sentence-transformers
- gte
- mteb
- transformers.js
- sentence-similarity
model-index:
- name: gte-base-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.7910447761194
- type: ap
value: 37.053785713650626
- type: f1
value: 68.51101510998551
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.016875
- type: ap
value: 89.17750268426342
- type: f1
value: 92.9970977240524
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.312000000000005
- type: f1
value: 52.98175784163017
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 38.193
- type: map_at_10
value: 54.848
- type: map_at_100
value: 55.388000000000005
- type: map_at_1000
value: 55.388999999999996
- type: map_at_3
value: 50.427
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 39.047
- type: mrr_at_10
value: 55.153
- type: mrr_at_100
value: 55.686
- type: mrr_at_1000
value: 55.688
- type: mrr_at_3
value: 50.676
- type: mrr_at_5
value: 53.417
- type: ndcg_at_1
value: 38.193
- type: ndcg_at_10
value: 63.486
- type: ndcg_at_100
value: 65.58
- type: ndcg_at_1000
value: 65.61
- type: ndcg_at_3
value: 54.494
- type: ndcg_at_5
value: 59.339
- type: precision_at_1
value: 38.193
- type: precision_at_10
value: 9.075
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.096
- type: precision_at_5
value: 15.619
- type: recall_at_1
value: 38.193
- type: recall_at_10
value: 90.754
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.28699999999999
- type: recall_at_5
value: 78.094
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.508221208908964
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.04668382560096
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.828759903716815
- type: mrr
value: 74.37343358395991
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.03673698773017
- type: cos_sim_spearman
value: 83.6470866785058
- type: euclidean_pearson
value: 82.64048673096565
- type: euclidean_spearman
value: 83.63142367101115
- type: manhattan_pearson
value: 82.71493099760228
- type: manhattan_spearman
value: 83.60491704294326
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.73376623376623
- type: f1
value: 86.70294049278262
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.31923804167062
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 37.552547125348454
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 30.567
- type: map_at_10
value: 41.269
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.84
- type: map_at_3
value: 37.567
- type: map_at_5
value: 39.706
- type: mrr_at_1
value: 37.053000000000004
- type: mrr_at_10
value: 46.900999999999996
- type: mrr_at_100
value: 47.662
- type: mrr_at_1000
value: 47.713
- type: mrr_at_3
value: 43.801
- type: mrr_at_5
value: 45.689
- type: ndcg_at_1
value: 37.053000000000004
- type: ndcg_at_10
value: 47.73
- type: ndcg_at_100
value: 53.128
- type: ndcg_at_1000
value: 55.300000000000004
- type: ndcg_at_3
value: 42.046
- type: ndcg_at_5
value: 44.782
- type: precision_at_1
value: 37.053000000000004
- type: precision_at_10
value: 9.142
- type: precision_at_100
value: 1.485
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.535
- type: recall_at_1
value: 30.567
- type: recall_at_10
value: 60.602999999999994
- type: recall_at_100
value: 83.22800000000001
- type: recall_at_1000
value: 96.696
- type: recall_at_3
value: 44.336999999999996
- type: recall_at_5
value: 51.949
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 28.538000000000004
- type: map_at_10
value: 38.757999999999996
- type: map_at_100
value: 40.129
- type: map_at_1000
value: 40.262
- type: map_at_3
value: 35.866
- type: map_at_5
value: 37.417
- type: mrr_at_1
value: 36.051
- type: mrr_at_10
value: 44.868
- type: mrr_at_100
value: 45.568999999999996
- type: mrr_at_1000
value: 45.615
- type: mrr_at_3
value: 42.558
- type: mrr_at_5
value: 43.883
- type: ndcg_at_1
value: 36.051
- type: ndcg_at_10
value: 44.584
- type: ndcg_at_100
value: 49.356
- type: ndcg_at_1000
value: 51.39
- type: ndcg_at_3
value: 40.389
- type: ndcg_at_5
value: 42.14
- type: precision_at_1
value: 36.051
- type: precision_at_10
value: 8.446
- type: precision_at_100
value: 1.411
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 19.639
- type: precision_at_5
value: 13.796
- type: recall_at_1
value: 28.538000000000004
- type: recall_at_10
value: 54.99000000000001
- type: recall_at_100
value: 75.098
- type: recall_at_1000
value: 87.848
- type: recall_at_3
value: 42.236000000000004
- type: recall_at_5
value: 47.377
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 37.188
- type: map_at_10
value: 50.861000000000004
- type: map_at_100
value: 51.917
- type: map_at_1000
value: 51.964999999999996
- type: map_at_3
value: 47.144000000000005
- type: map_at_5
value: 49.417
- type: mrr_at_1
value: 42.571
- type: mrr_at_10
value: 54.086999999999996
- type: mrr_at_100
value: 54.739000000000004
- type: mrr_at_1000
value: 54.762
- type: mrr_at_3
value: 51.285000000000004
- type: mrr_at_5
value: 53.0
- type: ndcg_at_1
value: 42.571
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.477000000000004
- type: ndcg_at_1000
value: 62.426
- type: ndcg_at_3
value: 51.0
- type: ndcg_at_5
value: 54.346000000000004
- type: precision_at_1
value: 42.571
- type: precision_at_10
value: 9.467
- type: precision_at_100
value: 1.2550000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 23.114
- type: precision_at_5
value: 16.250999999999998
- type: recall_at_1
value: 37.188
- type: recall_at_10
value: 73.068
- type: recall_at_100
value: 91.203
- type: recall_at_1000
value: 97.916
- type: recall_at_3
value: 56.552
- type: recall_at_5
value: 64.567
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 25.041000000000004
- type: map_at_10
value: 33.86
- type: map_at_100
value: 34.988
- type: map_at_1000
value: 35.064
- type: map_at_3
value: 31.049
- type: map_at_5
value: 32.845
- type: mrr_at_1
value: 26.893
- type: mrr_at_10
value: 35.594
- type: mrr_at_100
value: 36.617
- type: mrr_at_1000
value: 36.671
- type: mrr_at_3
value: 33.051
- type: mrr_at_5
value: 34.61
- type: ndcg_at_1
value: 26.893
- type: ndcg_at_10
value: 38.674
- type: ndcg_at_100
value: 44.178
- type: ndcg_at_1000
value: 46.089999999999996
- type: ndcg_at_3
value: 33.485
- type: ndcg_at_5
value: 36.402
- type: precision_at_1
value: 26.893
- type: precision_at_10
value: 5.989
- type: precision_at_100
value: 0.918
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 14.2
- type: precision_at_5
value: 10.26
- type: recall_at_1
value: 25.041000000000004
- type: recall_at_10
value: 51.666000000000004
- type: recall_at_100
value: 76.896
- type: recall_at_1000
value: 91.243
- type: recall_at_3
value: 38.035999999999994
- type: recall_at_5
value: 44.999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 15.909999999999998
- type: map_at_10
value: 23.901
- type: map_at_100
value: 25.165
- type: map_at_1000
value: 25.291000000000004
- type: map_at_3
value: 21.356
- type: map_at_5
value: 22.816
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 28.382
- type: mrr_at_100
value: 29.465000000000003
- type: mrr_at_1000
value: 29.535
- type: mrr_at_3
value: 25.933
- type: mrr_at_5
value: 27.332
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 29.099000000000004
- type: ndcg_at_100
value: 35.127
- type: ndcg_at_1000
value: 38.096000000000004
- type: ndcg_at_3
value: 24.464
- type: ndcg_at_5
value: 26.709
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 5.398
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 11.774
- type: precision_at_5
value: 8.632
- type: recall_at_1
value: 15.909999999999998
- type: recall_at_10
value: 40.672000000000004
- type: recall_at_100
value: 66.855
- type: recall_at_1000
value: 87.922
- type: recall_at_3
value: 28.069
- type: recall_at_5
value: 33.812
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 30.175
- type: map_at_10
value: 41.36
- type: map_at_100
value: 42.701
- type: map_at_1000
value: 42.817
- type: map_at_3
value: 37.931
- type: map_at_5
value: 39.943
- type: mrr_at_1
value: 35.611
- type: mrr_at_10
value: 46.346
- type: mrr_at_100
value: 47.160000000000004
- type: mrr_at_1000
value: 47.203
- type: mrr_at_3
value: 43.712
- type: mrr_at_5
value: 45.367000000000004
- type: ndcg_at_1
value: 35.611
- type: ndcg_at_10
value: 47.532000000000004
- type: ndcg_at_100
value: 53.003
- type: ndcg_at_1000
value: 55.007
- type: ndcg_at_3
value: 42.043
- type: ndcg_at_5
value: 44.86
- type: precision_at_1
value: 35.611
- type: precision_at_10
value: 8.624
- type: precision_at_100
value: 1.332
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 20.083000000000002
- type: precision_at_5
value: 14.437
- type: recall_at_1
value: 30.175
- type: recall_at_10
value: 60.5
- type: recall_at_100
value: 83.399
- type: recall_at_1000
value: 96.255
- type: recall_at_3
value: 45.448
- type: recall_at_5
value: 52.432
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 22.467000000000002
- type: map_at_10
value: 33.812999999999995
- type: map_at_100
value: 35.248000000000005
- type: map_at_1000
value: 35.359
- type: map_at_3
value: 30.316
- type: map_at_5
value: 32.233000000000004
- type: mrr_at_1
value: 28.310999999999996
- type: mrr_at_10
value: 38.979
- type: mrr_at_100
value: 39.937
- type: mrr_at_1000
value: 39.989999999999995
- type: mrr_at_3
value: 36.244
- type: mrr_at_5
value: 37.871
- type: ndcg_at_1
value: 28.310999999999996
- type: ndcg_at_10
value: 40.282000000000004
- type: ndcg_at_100
value: 46.22
- type: ndcg_at_1000
value: 48.507
- type: ndcg_at_3
value: 34.596
- type: ndcg_at_5
value: 37.267
- type: precision_at_1
value: 28.310999999999996
- type: precision_at_10
value: 7.831
- type: precision_at_100
value: 1.257
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 17.275
- type: precision_at_5
value: 12.556999999999999
- type: recall_at_1
value: 22.467000000000002
- type: recall_at_10
value: 54.14099999999999
- type: recall_at_100
value: 79.593
- type: recall_at_1000
value: 95.063
- type: recall_at_3
value: 38.539
- type: recall_at_5
value: 45.403
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 24.18591666666667
- type: map_at_10
value: 33.84258333333333
- type: map_at_100
value: 35.11391666666666
- type: map_at_1000
value: 35.23258333333333
- type: map_at_3
value: 30.764249999999997
- type: map_at_5
value: 32.52333333333334
- type: mrr_at_1
value: 28.54733333333333
- type: mrr_at_10
value: 37.81725
- type: mrr_at_100
value: 38.716499999999996
- type: mrr_at_1000
value: 38.77458333333333
- type: mrr_at_3
value: 35.157833333333336
- type: mrr_at_5
value: 36.69816666666667
- type: ndcg_at_1
value: 28.54733333333333
- type: ndcg_at_10
value: 39.51508333333334
- type: ndcg_at_100
value: 44.95316666666666
- type: ndcg_at_1000
value: 47.257083333333334
- type: ndcg_at_3
value: 34.205833333333324
- type: ndcg_at_5
value: 36.78266666666667
- type: precision_at_1
value: 28.54733333333333
- type: precision_at_10
value: 7.082583333333334
- type: precision_at_100
value: 1.1590833333333332
- type: precision_at_1000
value: 0.15516666666666662
- type: precision_at_3
value: 15.908750000000001
- type: precision_at_5
value: 11.505416666666669
- type: recall_at_1
value: 24.18591666666667
- type: recall_at_10
value: 52.38758333333333
- type: recall_at_100
value: 76.13666666666667
- type: recall_at_1000
value: 91.99066666666667
- type: recall_at_3
value: 37.78333333333334
- type: recall_at_5
value: 44.30141666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 21.975
- type: map_at_10
value: 29.781000000000002
- type: map_at_100
value: 30.847
- type: map_at_1000
value: 30.94
- type: map_at_3
value: 27.167
- type: map_at_5
value: 28.633999999999997
- type: mrr_at_1
value: 24.387
- type: mrr_at_10
value: 32.476
- type: mrr_at_100
value: 33.337
- type: mrr_at_1000
value: 33.403
- type: mrr_at_3
value: 29.881999999999998
- type: mrr_at_5
value: 31.339
- type: ndcg_at_1
value: 24.387
- type: ndcg_at_10
value: 34.596
- type: ndcg_at_100
value: 39.635
- type: ndcg_at_1000
value: 42.079
- type: ndcg_at_3
value: 29.516
- type: ndcg_at_5
value: 31.959
- type: precision_at_1
value: 24.387
- type: precision_at_10
value: 5.6129999999999995
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.73
- type: precision_at_5
value: 9.171999999999999
- type: recall_at_1
value: 21.975
- type: recall_at_10
value: 46.826
- type: recall_at_100
value: 69.554
- type: recall_at_1000
value: 87.749
- type: recall_at_3
value: 33.016
- type: recall_at_5
value: 38.97
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 15.614
- type: map_at_10
value: 22.927
- type: map_at_100
value: 24.185000000000002
- type: map_at_1000
value: 24.319
- type: map_at_3
value: 20.596
- type: map_at_5
value: 21.854000000000003
- type: mrr_at_1
value: 18.858
- type: mrr_at_10
value: 26.535999999999998
- type: mrr_at_100
value: 27.582
- type: mrr_at_1000
value: 27.665
- type: mrr_at_3
value: 24.295
- type: mrr_at_5
value: 25.532
- type: ndcg_at_1
value: 18.858
- type: ndcg_at_10
value: 27.583000000000002
- type: ndcg_at_100
value: 33.635
- type: ndcg_at_1000
value: 36.647
- type: ndcg_at_3
value: 23.348
- type: ndcg_at_5
value: 25.257
- type: precision_at_1
value: 18.858
- type: precision_at_10
value: 5.158
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 11.092
- type: precision_at_5
value: 8.1
- type: recall_at_1
value: 15.614
- type: recall_at_10
value: 37.916
- type: recall_at_100
value: 65.205
- type: recall_at_1000
value: 86.453
- type: recall_at_3
value: 26.137
- type: recall_at_5
value: 31.087999999999997
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 23.078000000000003
- type: map_at_10
value: 31.941999999999997
- type: map_at_100
value: 33.196999999999996
- type: map_at_1000
value: 33.303
- type: map_at_3
value: 28.927000000000003
- type: map_at_5
value: 30.707
- type: mrr_at_1
value: 26.866
- type: mrr_at_10
value: 35.557
- type: mrr_at_100
value: 36.569
- type: mrr_at_1000
value: 36.632
- type: mrr_at_3
value: 32.897999999999996
- type: mrr_at_5
value: 34.437
- type: ndcg_at_1
value: 26.866
- type: ndcg_at_10
value: 37.372
- type: ndcg_at_100
value: 43.248
- type: ndcg_at_1000
value: 45.632
- type: ndcg_at_3
value: 31.852999999999998
- type: ndcg_at_5
value: 34.582
- type: precision_at_1
value: 26.866
- type: precision_at_10
value: 6.511
- type: precision_at_100
value: 1.078
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 14.582999999999998
- type: precision_at_5
value: 10.634
- type: recall_at_1
value: 23.078000000000003
- type: recall_at_10
value: 50.334
- type: recall_at_100
value: 75.787
- type: recall_at_1000
value: 92.485
- type: recall_at_3
value: 35.386
- type: recall_at_5
value: 42.225
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 22.203999999999997
- type: map_at_10
value: 31.276
- type: map_at_100
value: 32.844
- type: map_at_1000
value: 33.062999999999995
- type: map_at_3
value: 27.733999999999998
- type: map_at_5
value: 29.64
- type: mrr_at_1
value: 27.272999999999996
- type: mrr_at_10
value: 36.083
- type: mrr_at_100
value: 37.008
- type: mrr_at_1000
value: 37.076
- type: mrr_at_3
value: 33.004
- type: mrr_at_5
value: 34.664
- type: ndcg_at_1
value: 27.272999999999996
- type: ndcg_at_10
value: 37.763000000000005
- type: ndcg_at_100
value: 43.566
- type: ndcg_at_1000
value: 46.356
- type: ndcg_at_3
value: 31.673000000000002
- type: ndcg_at_5
value: 34.501
- type: precision_at_1
value: 27.272999999999996
- type: precision_at_10
value: 7.470000000000001
- type: precision_at_100
value: 1.502
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 14.756
- type: precision_at_5
value: 11.225
- type: recall_at_1
value: 22.203999999999997
- type: recall_at_10
value: 51.437999999999995
- type: recall_at_100
value: 76.845
- type: recall_at_1000
value: 94.38600000000001
- type: recall_at_3
value: 34.258
- type: recall_at_5
value: 41.512
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 17.474
- type: map_at_10
value: 26.362999999999996
- type: map_at_100
value: 27.456999999999997
- type: map_at_1000
value: 27.567999999999998
- type: map_at_3
value: 23.518
- type: map_at_5
value: 25.068
- type: mrr_at_1
value: 18.669
- type: mrr_at_10
value: 27.998
- type: mrr_at_100
value: 28.953
- type: mrr_at_1000
value: 29.03
- type: mrr_at_3
value: 25.230999999999998
- type: mrr_at_5
value: 26.654
- type: ndcg_at_1
value: 18.669
- type: ndcg_at_10
value: 31.684
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.555
- type: ndcg_at_3
value: 26.057000000000002
- type: ndcg_at_5
value: 28.587
- type: precision_at_1
value: 18.669
- type: precision_at_10
value: 5.3420000000000005
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 11.583
- type: precision_at_5
value: 8.466
- type: recall_at_1
value: 17.474
- type: recall_at_10
value: 46.497
- type: recall_at_100
value: 69.977
- type: recall_at_1000
value: 89.872
- type: recall_at_3
value: 31.385999999999996
- type: recall_at_5
value: 37.283
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 17.173
- type: map_at_10
value: 30.407
- type: map_at_100
value: 32.528
- type: map_at_1000
value: 32.698
- type: map_at_3
value: 25.523
- type: map_at_5
value: 28.038
- type: mrr_at_1
value: 38.958
- type: mrr_at_10
value: 51.515
- type: mrr_at_100
value: 52.214000000000006
- type: mrr_at_1000
value: 52.237
- type: mrr_at_3
value: 48.502
- type: mrr_at_5
value: 50.251000000000005
- type: ndcg_at_1
value: 38.958
- type: ndcg_at_10
value: 40.355000000000004
- type: ndcg_at_100
value: 47.68
- type: ndcg_at_1000
value: 50.370000000000005
- type: ndcg_at_3
value: 33.946
- type: ndcg_at_5
value: 36.057
- type: precision_at_1
value: 38.958
- type: precision_at_10
value: 12.508
- type: precision_at_100
value: 2.054
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 25.581
- type: precision_at_5
value: 19.256999999999998
- type: recall_at_1
value: 17.173
- type: recall_at_10
value: 46.967
- type: recall_at_100
value: 71.47200000000001
- type: recall_at_1000
value: 86.238
- type: recall_at_3
value: 30.961
- type: recall_at_5
value: 37.539
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 8.999
- type: map_at_10
value: 18.989
- type: map_at_100
value: 26.133
- type: map_at_1000
value: 27.666
- type: map_at_3
value: 13.918
- type: map_at_5
value: 16.473
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.161
- type: mrr_at_100
value: 74.516
- type: mrr_at_1000
value: 74.524
- type: mrr_at_3
value: 72.875
- type: mrr_at_5
value: 73.613
- type: ndcg_at_1
value: 54.37499999999999
- type: ndcg_at_10
value: 39.902
- type: ndcg_at_100
value: 44.212
- type: ndcg_at_1000
value: 51.62
- type: ndcg_at_3
value: 45.193
- type: ndcg_at_5
value: 42.541000000000004
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 30.425
- type: precision_at_100
value: 9.754999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 48.25
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.999
- type: recall_at_10
value: 24.133
- type: recall_at_100
value: 49.138999999999996
- type: recall_at_1000
value: 72.639
- type: recall_at_3
value: 15.287999999999998
- type: recall_at_5
value: 19.415
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.38999999999999
- type: f1
value: 41.444205512055234
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.35000000000001
- type: map_at_10
value: 92.837
- type: map_at_100
value: 92.996
- type: map_at_1000
value: 93.006
- type: map_at_3
value: 92.187
- type: map_at_5
value: 92.595
- type: mrr_at_1
value: 93.864
- type: mrr_at_10
value: 96.723
- type: mrr_at_100
value: 96.72500000000001
- type: mrr_at_1000
value: 96.72500000000001
- type: mrr_at_3
value: 96.64
- type: mrr_at_5
value: 96.71499999999999
- type: ndcg_at_1
value: 93.864
- type: ndcg_at_10
value: 94.813
- type: ndcg_at_100
value: 95.243
- type: ndcg_at_1000
value: 95.38600000000001
- type: ndcg_at_3
value: 94.196
- type: ndcg_at_5
value: 94.521
- type: precision_at_1
value: 93.864
- type: precision_at_10
value: 10.951
- type: precision_at_100
value: 1.1400000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 35.114000000000004
- type: precision_at_5
value: 21.476
- type: recall_at_1
value: 87.35000000000001
- type: recall_at_10
value: 96.941
- type: recall_at_100
value: 98.397
- type: recall_at_1000
value: 99.21600000000001
- type: recall_at_3
value: 95.149
- type: recall_at_5
value: 96.131
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 24.476
- type: map_at_10
value: 40.11
- type: map_at_100
value: 42.229
- type: map_at_1000
value: 42.378
- type: map_at_3
value: 34.512
- type: map_at_5
value: 38.037
- type: mrr_at_1
value: 47.839999999999996
- type: mrr_at_10
value: 57.053
- type: mrr_at_100
value: 57.772
- type: mrr_at_1000
value: 57.799
- type: mrr_at_3
value: 54.552
- type: mrr_at_5
value: 56.011
- type: ndcg_at_1
value: 47.839999999999996
- type: ndcg_at_10
value: 48.650999999999996
- type: ndcg_at_100
value: 55.681000000000004
- type: ndcg_at_1000
value: 57.979
- type: ndcg_at_3
value: 43.923
- type: ndcg_at_5
value: 46.037
- type: precision_at_1
value: 47.839999999999996
- type: precision_at_10
value: 13.395000000000001
- type: precision_at_100
value: 2.0660000000000003
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 29.064
- type: precision_at_5
value: 22.006
- type: recall_at_1
value: 24.476
- type: recall_at_10
value: 56.216
- type: recall_at_100
value: 81.798
- type: recall_at_1000
value: 95.48299999999999
- type: recall_at_3
value: 39.357
- type: recall_at_5
value: 47.802
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.728
- type: map_at_10
value: 57.737
- type: map_at_100
value: 58.531
- type: map_at_1000
value: 58.594
- type: map_at_3
value: 54.869
- type: map_at_5
value: 56.55
- type: mrr_at_1
value: 85.456
- type: mrr_at_10
value: 90.062
- type: mrr_at_100
value: 90.159
- type: mrr_at_1000
value: 90.16
- type: mrr_at_3
value: 89.37899999999999
- type: mrr_at_5
value: 89.81
- type: ndcg_at_1
value: 85.456
- type: ndcg_at_10
value: 67.755
- type: ndcg_at_100
value: 70.341
- type: ndcg_at_1000
value: 71.538
- type: ndcg_at_3
value: 63.735
- type: ndcg_at_5
value: 65.823
- type: precision_at_1
value: 85.456
- type: precision_at_10
value: 13.450000000000001
- type: precision_at_100
value: 1.545
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 38.861000000000004
- type: precision_at_5
value: 24.964
- type: recall_at_1
value: 42.728
- type: recall_at_10
value: 67.252
- type: recall_at_100
value: 77.265
- type: recall_at_1000
value: 85.246
- type: recall_at_3
value: 58.292
- type: recall_at_5
value: 62.41100000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 87.4836
- type: ap
value: 82.29552224030336
- type: f1
value: 87.42791432227448
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.015
- type: map_at_10
value: 35.621
- type: map_at_100
value: 36.809
- type: map_at_1000
value: 36.853
- type: map_at_3
value: 31.832
- type: map_at_5
value: 34.006
- type: mrr_at_1
value: 23.738999999999997
- type: mrr_at_10
value: 36.309999999999995
- type: mrr_at_100
value: 37.422
- type: mrr_at_1000
value: 37.461
- type: mrr_at_3
value: 32.592999999999996
- type: mrr_at_5
value: 34.736
- type: ndcg_at_1
value: 23.724999999999998
- type: ndcg_at_10
value: 42.617
- type: ndcg_at_100
value: 48.217999999999996
- type: ndcg_at_1000
value: 49.309
- type: ndcg_at_3
value: 34.905
- type: ndcg_at_5
value: 38.769
- type: precision_at_1
value: 23.724999999999998
- type: precision_at_10
value: 6.689
- type: precision_at_100
value: 0.9480000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.89
- type: precision_at_5
value: 10.897
- type: recall_at_1
value: 23.015
- type: recall_at_10
value: 64.041
- type: recall_at_100
value: 89.724
- type: recall_at_1000
value: 98.00999999999999
- type: recall_at_3
value: 43.064
- type: recall_at_5
value: 52.31099999999999
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.49794801641588
- type: f1
value: 96.28931114498003
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.81121751025992
- type: f1
value: 63.18740125901853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.66644250168123
- type: f1
value: 74.93211186867839
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 81.77202420981843
- type: f1
value: 81.63681969283554
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.596687684870645
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.26965660101405
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.33619694846802
- type: mrr
value: 32.53719657720334
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.0729999999999995
- type: map_at_10
value: 13.245999999999999
- type: map_at_100
value: 16.747999999999998
- type: map_at_1000
value: 18.163
- type: map_at_3
value: 10.064
- type: map_at_5
value: 11.513
- type: mrr_at_1
value: 49.536
- type: mrr_at_10
value: 58.092
- type: mrr_at_100
value: 58.752
- type: mrr_at_1000
value: 58.78
- type: mrr_at_3
value: 56.398
- type: mrr_at_5
value: 57.389
- type: ndcg_at_1
value: 47.059
- type: ndcg_at_10
value: 35.881
- type: ndcg_at_100
value: 32.751999999999995
- type: ndcg_at_1000
value: 41.498000000000005
- type: ndcg_at_3
value: 42.518
- type: ndcg_at_5
value: 39.550999999999995
- type: precision_at_1
value: 49.536
- type: precision_at_10
value: 26.316
- type: precision_at_100
value: 8.084
- type: precision_at_1000
value: 2.081
- type: precision_at_3
value: 39.938
- type: precision_at_5
value: 34.056
- type: recall_at_1
value: 6.0729999999999995
- type: recall_at_10
value: 16.593
- type: recall_at_100
value: 32.883
- type: recall_at_1000
value: 64.654
- type: recall_at_3
value: 11.174000000000001
- type: recall_at_5
value: 13.528
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 30.043
- type: map_at_10
value: 45.318999999999996
- type: map_at_100
value: 46.381
- type: map_at_1000
value: 46.412
- type: map_at_3
value: 40.941
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 33.98
- type: mrr_at_10
value: 47.870000000000005
- type: mrr_at_100
value: 48.681999999999995
- type: mrr_at_1000
value: 48.703
- type: mrr_at_3
value: 44.341
- type: mrr_at_5
value: 46.547
- type: ndcg_at_1
value: 33.98
- type: ndcg_at_10
value: 52.957
- type: ndcg_at_100
value: 57.434
- type: ndcg_at_1000
value: 58.103
- type: ndcg_at_3
value: 44.896
- type: ndcg_at_5
value: 49.353
- type: precision_at_1
value: 33.98
- type: precision_at_10
value: 8.786
- type: precision_at_100
value: 1.1280000000000001
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 20.577
- type: precision_at_5
value: 14.942
- type: recall_at_1
value: 30.043
- type: recall_at_10
value: 73.593
- type: recall_at_100
value: 93.026
- type: recall_at_1000
value: 97.943
- type: recall_at_3
value: 52.955
- type: recall_at_5
value: 63.132
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.808
- type: map_at_10
value: 84.675
- type: map_at_100
value: 85.322
- type: map_at_1000
value: 85.33800000000001
- type: map_at_3
value: 81.68900000000001
- type: map_at_5
value: 83.543
- type: mrr_at_1
value: 81.5
- type: mrr_at_10
value: 87.59700000000001
- type: mrr_at_100
value: 87.705
- type: mrr_at_1000
value: 87.70599999999999
- type: mrr_at_3
value: 86.607
- type: mrr_at_5
value: 87.289
- type: ndcg_at_1
value: 81.51
- type: ndcg_at_10
value: 88.41799999999999
- type: ndcg_at_100
value: 89.644
- type: ndcg_at_1000
value: 89.725
- type: ndcg_at_3
value: 85.49900000000001
- type: ndcg_at_5
value: 87.078
- type: precision_at_1
value: 81.51
- type: precision_at_10
value: 13.438
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.363
- type: precision_at_5
value: 24.57
- type: recall_at_1
value: 70.808
- type: recall_at_10
value: 95.575
- type: recall_at_100
value: 99.667
- type: recall_at_1000
value: 99.98899999999999
- type: recall_at_3
value: 87.223
- type: recall_at_5
value: 91.682
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 58.614831329137715
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.86580408560826
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.093
- type: map_at_10
value: 13.014000000000001
- type: map_at_100
value: 15.412999999999998
- type: map_at_1000
value: 15.756999999999998
- type: map_at_3
value: 9.216000000000001
- type: map_at_5
value: 11.036999999999999
- type: mrr_at_1
value: 25.1
- type: mrr_at_10
value: 37.133
- type: mrr_at_100
value: 38.165
- type: mrr_at_1000
value: 38.198
- type: mrr_at_3
value: 33.217
- type: mrr_at_5
value: 35.732
- type: ndcg_at_1
value: 25.1
- type: ndcg_at_10
value: 21.918000000000003
- type: ndcg_at_100
value: 30.983
- type: ndcg_at_1000
value: 36.629
- type: ndcg_at_3
value: 20.544999999999998
- type: ndcg_at_5
value: 18.192
- type: precision_at_1
value: 25.1
- type: precision_at_10
value: 11.44
- type: precision_at_100
value: 2.459
- type: precision_at_1000
value: 0.381
- type: precision_at_3
value: 19.267
- type: precision_at_5
value: 16.16
- type: recall_at_1
value: 5.093
- type: recall_at_10
value: 23.215
- type: recall_at_100
value: 49.902
- type: recall_at_1000
value: 77.403
- type: recall_at_3
value: 11.733
- type: recall_at_5
value: 16.372999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.9365442977452
- type: cos_sim_spearman
value: 79.36960687383745
- type: euclidean_pearson
value: 79.6045204840714
- type: euclidean_spearman
value: 79.26382712751337
- type: manhattan_pearson
value: 79.4805084789529
- type: manhattan_spearman
value: 79.21847863209523
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.27906192961453
- type: cos_sim_spearman
value: 74.38364712099211
- type: euclidean_pearson
value: 78.54358927241223
- type: euclidean_spearman
value: 74.22185560806376
- type: manhattan_pearson
value: 78.50904327377751
- type: manhattan_spearman
value: 74.2627500781748
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.66863742649639
- type: cos_sim_spearman
value: 84.70630905216271
- type: euclidean_pearson
value: 84.64498334705334
- type: euclidean_spearman
value: 84.87204770690148
- type: manhattan_pearson
value: 84.65774227976077
- type: manhattan_spearman
value: 84.91251851797985
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.1577763924467
- type: cos_sim_spearman
value: 80.10314039230198
- type: euclidean_pearson
value: 81.51346991046043
- type: euclidean_spearman
value: 80.08678485109435
- type: manhattan_pearson
value: 81.57058914661894
- type: manhattan_spearman
value: 80.1516230725106
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.40310839662533
- type: cos_sim_spearman
value: 87.16293477217867
- type: euclidean_pearson
value: 86.50688711184775
- type: euclidean_spearman
value: 87.08651444923031
- type: manhattan_pearson
value: 86.54674677557857
- type: manhattan_spearman
value: 87.15079017870971
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.32886275207817
- type: cos_sim_spearman
value: 85.0190460590732
- type: euclidean_pearson
value: 84.42553652784679
- type: euclidean_spearman
value: 85.20027364279328
- type: manhattan_pearson
value: 84.42926246281078
- type: manhattan_spearman
value: 85.20187419804306
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.76732216967812
- type: cos_sim_spearman
value: 90.63701653633909
- type: euclidean_pearson
value: 90.26678186114682
- type: euclidean_spearman
value: 90.67288073455427
- type: manhattan_pearson
value: 90.20772020584582
- type: manhattan_spearman
value: 90.60764863983702
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 69.09280387698125
- type: cos_sim_spearman
value: 68.62743151172162
- type: euclidean_pearson
value: 69.89386398104689
- type: euclidean_spearman
value: 68.71191066733556
- type: manhattan_pearson
value: 69.92516500604872
- type: manhattan_spearman
value: 68.80452846992576
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.13178592019887
- type: cos_sim_spearman
value: 86.03947178806887
- type: euclidean_pearson
value: 85.87029414285313
- type: euclidean_spearman
value: 86.04960843306998
- type: manhattan_pearson
value: 85.92946858580146
- type: manhattan_spearman
value: 86.12575341860442
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.16657063002837
- type: mrr
value: 95.73671063867141
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 63.510999999999996
- type: map_at_10
value: 72.76899999999999
- type: map_at_100
value: 73.303
- type: map_at_1000
value: 73.32499999999999
- type: map_at_3
value: 70.514
- type: map_at_5
value: 71.929
- type: mrr_at_1
value: 66.333
- type: mrr_at_10
value: 73.75
- type: mrr_at_100
value: 74.119
- type: mrr_at_1000
value: 74.138
- type: mrr_at_3
value: 72.222
- type: mrr_at_5
value: 73.122
- type: ndcg_at_1
value: 66.333
- type: ndcg_at_10
value: 76.774
- type: ndcg_at_100
value: 78.78500000000001
- type: ndcg_at_1000
value: 79.254
- type: ndcg_at_3
value: 73.088
- type: ndcg_at_5
value: 75.002
- type: precision_at_1
value: 66.333
- type: precision_at_10
value: 9.833
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.222
- type: precision_at_5
value: 18.333
- type: recall_at_1
value: 63.510999999999996
- type: recall_at_10
value: 87.98899999999999
- type: recall_at_100
value: 96.5
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 77.86699999999999
- type: recall_at_5
value: 82.73899999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.78514851485149
- type: cos_sim_ap
value: 94.94214383862038
- type: cos_sim_f1
value: 89.02255639097744
- type: cos_sim_precision
value: 89.2462311557789
- type: cos_sim_recall
value: 88.8
- type: dot_accuracy
value: 99.78217821782178
- type: dot_ap
value: 94.69965247836805
- type: dot_f1
value: 88.78695208970439
- type: dot_precision
value: 90.54054054054053
- type: dot_recall
value: 87.1
- type: euclidean_accuracy
value: 99.78118811881188
- type: euclidean_ap
value: 94.9865187695411
- type: euclidean_f1
value: 88.99950223992036
- type: euclidean_precision
value: 88.60257680872151
- type: euclidean_recall
value: 89.4
- type: manhattan_accuracy
value: 99.78811881188119
- type: manhattan_ap
value: 95.0021236766459
- type: manhattan_f1
value: 89.12071535022356
- type: manhattan_precision
value: 88.54886475814413
- type: manhattan_recall
value: 89.7
- type: max_accuracy
value: 99.78811881188119
- type: max_ap
value: 95.0021236766459
- type: max_f1
value: 89.12071535022356
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.93190546593995
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 37.602808534760655
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.29214480978073
- type: mrr
value: 53.123169722434426
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.967800769650022
- type: cos_sim_spearman
value: 31.168490040206926
- type: dot_pearson
value: 30.888603021128553
- type: dot_spearman
value: 31.028241262520385
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22300000000000003
- type: map_at_10
value: 1.781
- type: map_at_100
value: 9.905999999999999
- type: map_at_1000
value: 23.455000000000002
- type: map_at_3
value: 0.569
- type: map_at_5
value: 0.918
- type: mrr_at_1
value: 84.0
- type: mrr_at_10
value: 91.067
- type: mrr_at_100
value: 91.067
- type: mrr_at_1000
value: 91.067
- type: mrr_at_3
value: 90.667
- type: mrr_at_5
value: 91.067
- type: ndcg_at_1
value: 78.0
- type: ndcg_at_10
value: 73.13499999999999
- type: ndcg_at_100
value: 55.32
- type: ndcg_at_1000
value: 49.532
- type: ndcg_at_3
value: 73.715
- type: ndcg_at_5
value: 72.74199999999999
- type: precision_at_1
value: 84.0
- type: precision_at_10
value: 78.8
- type: precision_at_100
value: 56.32
- type: precision_at_1000
value: 21.504
- type: precision_at_3
value: 77.333
- type: precision_at_5
value: 78.0
- type: recall_at_1
value: 0.22300000000000003
- type: recall_at_10
value: 2.049
- type: recall_at_100
value: 13.553
- type: recall_at_1000
value: 46.367999999999995
- type: recall_at_3
value: 0.604
- type: recall_at_5
value: 1.015
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.0380000000000003
- type: map_at_10
value: 10.188
- type: map_at_100
value: 16.395
- type: map_at_1000
value: 18.024
- type: map_at_3
value: 6.236
- type: map_at_5
value: 7.276000000000001
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 46.292
- type: mrr_at_100
value: 47.446
- type: mrr_at_1000
value: 47.446
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 44.32
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 25.219
- type: ndcg_at_100
value: 37.802
- type: ndcg_at_1000
value: 49.274
- type: ndcg_at_3
value: 28.605999999999998
- type: ndcg_at_5
value: 26.21
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 21.837
- type: precision_at_100
value: 7.776
- type: precision_at_1000
value: 1.522
- type: precision_at_3
value: 28.571
- type: precision_at_5
value: 25.306
- type: recall_at_1
value: 3.0380000000000003
- type: recall_at_10
value: 16.298000000000002
- type: recall_at_100
value: 48.712
- type: recall_at_1000
value: 83.16799999999999
- type: recall_at_3
value: 7.265000000000001
- type: recall_at_5
value: 9.551
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 83.978
- type: ap
value: 24.751887949330015
- type: f1
value: 66.8685134049279
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.573288058856825
- type: f1
value: 61.973261751726604
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.75483298792469
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.36824223639506
- type: cos_sim_ap
value: 75.53126388573047
- type: cos_sim_f1
value: 67.9912831688245
- type: cos_sim_precision
value: 66.11817501869858
- type: cos_sim_recall
value: 69.9736147757256
- type: dot_accuracy
value: 86.39804494248078
- type: dot_ap
value: 75.27598891718046
- type: dot_f1
value: 67.91146284159763
- type: dot_precision
value: 63.90505003490807
- type: dot_recall
value: 72.45382585751979
- type: euclidean_accuracy
value: 86.36228169517793
- type: euclidean_ap
value: 75.51438087434647
- type: euclidean_f1
value: 68.02370523061066
- type: euclidean_precision
value: 66.46525679758308
- type: euclidean_recall
value: 69.65699208443272
- type: manhattan_accuracy
value: 86.46361089586935
- type: manhattan_ap
value: 75.50800785730111
- type: manhattan_f1
value: 67.9220437187253
- type: manhattan_precision
value: 67.79705573080967
- type: manhattan_recall
value: 68.04749340369392
- type: max_accuracy
value: 86.46361089586935
- type: max_ap
value: 75.53126388573047
- type: max_f1
value: 68.02370523061066
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.80350836341057
- type: cos_sim_ap
value: 85.51101933260743
- type: cos_sim_f1
value: 77.9152271629704
- type: cos_sim_precision
value: 75.27815662910056
- type: cos_sim_recall
value: 80.74376347397599
- type: dot_accuracy
value: 88.84425815966158
- type: dot_ap
value: 85.49726945962519
- type: dot_f1
value: 77.94445269567801
- type: dot_precision
value: 75.27251864601261
- type: dot_recall
value: 80.81305820757623
- type: euclidean_accuracy
value: 88.80350836341057
- type: euclidean_ap
value: 85.4882880790211
- type: euclidean_f1
value: 77.87063284615103
- type: euclidean_precision
value: 74.61022927689595
- type: euclidean_recall
value: 81.42901139513397
- type: manhattan_accuracy
value: 88.7161873714441
- type: manhattan_ap
value: 85.45753871906821
- type: manhattan_f1
value: 77.8686401480111
- type: manhattan_precision
value: 74.95903683123174
- type: manhattan_recall
value: 81.01324299353249
- type: max_accuracy
value: 88.84425815966158
- type: max_ap
value: 85.51101933260743
- type: max_f1
value: 77.94445269567801
---
<!-- **English** | [中文](./README_zh.md) -->
# gte-base-en-v1.5
We introduce the `gte-v1.5` series, upgraded `gte` embeddings that support a context length of up to **8192** tokens while further improving model performance.
The models are built upon the `transformer++` encoder [backbone](https://huggingface.co/Alibaba-NLP/new-impl) (BERT + RoPE + GLU).
The `gte-v1.5` series achieves state-of-the-art scores on the MTEB benchmark within the same model size category and delivers competitive results on the LoCo long-context retrieval tests (refer to [Evaluation](#evaluation)).
We also present the [`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct),
a SOTA instruction-tuned multi-lingual embedding model that ranked 2nd in MTEB and 1st in C-MTEB.
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Institute for Intelligent Computing, Alibaba Group
- **Model type:** Text Embeddings
- **Paper:** [mGTE: Generalized Long-Context Text Representation and Reranking
Models for Multilingual Text Retrieval](https://arxiv.org/pdf/2407.19669)
<!-- - **Demo [optional]:** [More Information Needed] -->
### Model list
| Models | Language | Model Size (M) | Max Seq. Length | Dimension | MTEB-en | LoCo |
|:-----: | :-----: |:-----: |:-----: |:-----: | :-----: | :-----: |
|[`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| Multiple | 7720 | 32768 | 4096 | 67.34 | 87.57 |
|[`gte-large-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 434 | 8192 | 1024 | 65.39 | 86.71 |
|[`gte-base-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 137 | 8192 | 768 | 64.11 | 87.44 |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Requires transformers>=4.36.0
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
model_path = 'Alibaba-NLP/gte-base-en-v1.5'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=8192, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = outputs.last_hidden_state[:, 0]
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
**For faster inference, it is recommended to install xformers and enable unpadding; refer to [enable-unpadding-and-xformers](https://huggingface.co/Alibaba-NLP/new-impl#recommendation-enable-unpadding-and-acceleration-with-xformers).**
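As a rough illustration of that setup, the load call below passes the acceleration options as extra keyword arguments; the flag names `unpad_inputs` and `use_memory_efficient_attention` are assumed from the linked documentation, so verify them there before relying on this sketch.
```python
# Minimal sketch: loading with unpadding and xformers attention enabled.
# The `unpad_inputs` and `use_memory_efficient_attention` kwargs are assumed to
# match the options described in the linked new-impl docs (requires xformers + GPU).
import torch
from transformers import AutoModel, AutoTokenizer

model_path = 'Alibaba-NLP/gte-base-en-v1.5'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    unpad_inputs=True,                    # skip compute on padding tokens
    use_memory_efficient_attention=True,  # route attention through xformers
    torch_dtype=torch.float16,
).to('cuda')
```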
Use with `sentence-transformers`:
```python
# Requires sentence_transformers>=2.7.0
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('Alibaba-NLP/gte-base-en-v1.5', trust_remote_code=True)
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
Use with `transformers.js`:
```js
// npm i @xenova/transformers
import { pipeline, dot } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Alibaba-NLP/gte-base-en-v1.5', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const sentences = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => 100 * dot(source_embeddings, x));
console.log(similarities); // [34.504930869007296, 64.03973265120138, 19.520042686034362]
```
Use with Infinity:
[Infinity](https://github.com/michaelfeil/infinity) is an MIT-licensed server for OpenAI-compatible deployment.
```bash
docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
michaelf34/infinity:0.0.68 \
v2 --model-id Alibaba-NLP/gte-base-en-v1.5 --revision "4c742dc2b781e4ab062a4a77f4f7cbad4bdee970" --dtype bfloat16 --batch-size 32 --device cuda --engine torch --port 7997
```
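Once the container is running, embeddings can be requested over HTTP; the sketch below assumes Infinity's OpenAI-compatible `/embeddings` route on the port mapped above (check the Infinity docs for your version).
```python
# Minimal client sketch; assumes the Infinity container above serves an
# OpenAI-compatible /embeddings endpoint on localhost:7997.
import requests

response = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "Alibaba-NLP/gte-base-en-v1.5",
        "input": ["what is the capital of China?", "Beijing"],
    },
)
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), len(embeddings[0]))  # 2 vectors, 768 dimensions each
```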
## Training Details
### Training Data
- Masked language modeling (MLM): `c4-en`
- Weak-supervised contrastive pre-training (CPT): [GTE](https://arxiv.org/pdf/2308.03281.pdf) pre-training data
- Supervised contrastive fine-tuning: [GTE](https://arxiv.org/pdf/2308.03281.pdf) fine-tuning data
### Training Procedure
To enable the backbone model to support a context length of 8192, we adopted a multi-stage training strategy.
The model first undergoes preliminary MLM pre-training at shorter sequence lengths.
We then resample the data, reducing the proportion of short texts, and continue MLM pre-training.
The entire training process is as follows:
- MLM-2048: lr 5e-4, mlm_probability 0.3, batch_size 4096, num_steps 70000, rope_base 10000
- [MLM-8192](https://huggingface.co/Alibaba-NLP/gte-en-mlm-base): lr 5e-5, mlm_probability 0.3, batch_size 1024, num_steps 20000, rope_base 500000
- CPT: max_len 512, lr 2e-4, batch_size 32768, num_steps 100000
- Fine-tuning: TODO
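The jump in `rope_base` from 10000 to 500000 between the two MLM stages is what stretches the rotary position embedding to the longer context (and the NTK factor of 2 used at evaluation effectively doubles the base again). The snippet below is only an illustrative sketch of the standard RoPE inverse-frequency formula, not code from this training pipeline.

```python
# Illustrative sketch of how rope_base affects rotary position embeddings (RoPE).
# Generic RoPE math for intuition only, not the project's training code.
import torch

def rope_inverse_frequencies(dim: int, base: float) -> torch.Tensor:
    # Standard RoPE: inv_freq_i = base^(-2i/dim) for i = 0, 1, ..., dim/2 - 1.
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

head_dim = 64  # a typical attention head size; purely for illustration
for base in (10_000, 500_000, 2 * 500_000):  # last entry ~ NTK scaling factor 2
    inv_freq = rope_inverse_frequencies(head_dim, base)
    # A larger base slows the lowest-frequency rotation, loosely corresponding
    # to a longer range of positions the embedding can keep distinct.
    longest_wavelength = (2 * torch.pi / inv_freq[-1]).item()
    print(f"base={base:>9,}: longest wavelength ≈ {longest_wavelength:,.0f} positions")
```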
## Evaluation
### MTEB
The results of other models are retrieved from the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
The gte evaluation setting: `mteb==1.2.0`, fp16 automatic mixed precision, `max_length=8192`, and an NTK scaling factor of 2 (equivalent to `rope_base * 2`); a rough reproduction sketch follows the table.
| Model Name | Param Size (M) | Dimension | Sequence Length | Average (56) | Class. (12) | Clust. (11) | Pair Class. (3) | Reran. (4) | Retr. (15) | STS (10) | Summ. (1) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 434 | 1024 | 8192 | **65.39** | 77.75 | 47.95 | 84.63 | 58.50 | 57.91 | 81.43 | 30.91 |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 335 | 1024 | 512 | 64.68 | 75.64 | 46.71 | 87.2 | 60.11 | 54.39 | 85 | 32.71 |
| [multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) | 560 | 1024 | 514 | 64.41 | 77.56 | 47.1 | 86.19 | 58.58 | 52.47 | 84.78 | 30.39 |
| [bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)| 335 | 1024 | 512 | 64.23 | 75.97 | 46.08 | 87.12 | 60.03 | 54.29 | 83.11 | 31.61 |
| [**gte-base-en-v1.5**](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 137 | 768 | 8192 | **64.11** | 77.17 | 46.82 | 85.33 | 57.66 | 54.09 | 81.97 | 31.17 |
| [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)| 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 |
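For readers who want to spot-check a number in the table, the sketch below shows roughly how a single MTEB task can be run against the model with the `mteb` package. The task name and output folder are placeholders, and the exact API may vary slightly across `mteb` versions.

```python
# Rough reproduction sketch with the mteb package (1.x-style API); the task
# and output folder are placeholders, not the exact official evaluation setup.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('Alibaba-NLP/gte-base-en-v1.5', trust_remote_code=True)
model.max_seq_length = 8192  # evaluate with the full context window

evaluation = MTEB(tasks=["Banking77Classification"])  # any MTEB task name(s)
evaluation.run(model, output_folder="results/gte-base-en-v1.5")
```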
### LoCo
| Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbstractRetrieval | QasperTitleRetrieval | GovReportRetrieval |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [gte-qwen1.5-7b](https://huggingface.co/Alibaba-NLP/gte-qwen1.5-7b) | 4096 | 32768 | 87.57 | 49.37 | 93.10 | 99.67 | 97.54 | 98.21 |
| [gte-large-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-v1.5) |1024 | 8192 | 86.71 | 44.55 | 92.61 | 99.82 | 97.81 | 98.74 |
| [gte-base-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-v1.5) | 768 | 8192 | 87.44 | 49.91 | 91.78 | 99.82 | 97.13 | 98.58 |
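The LoCo tasks above all involve long documents, which is where the 8192-token window matters most. As a small sketch (with obviously placeholder text), a long document and a short query can be compared like this via `sentence-transformers`:

```python
# Sketch: long-document similarity with the full 8192-token context window.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model = SentenceTransformer('Alibaba-NLP/gte-base-en-v1.5', trust_remote_code=True)
model.max_seq_length = 8192  # raise the encode length to the supported maximum

long_document = " ".join(["This report discusses long-context retrieval."] * 500)  # placeholder
query = "What does the report discuss?"

doc_emb = model.encode(long_document)
query_emb = model.encode(query)
print(cos_sim(query_emb, doc_emb))  # a (1, 1) similarity matrix
```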
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```bibtex
@misc{zhang2024mgte,
title={mGTE: Generalized Long-Context Text Representation and Reranking Models for Multilingual Text Retrieval},
author={Xin Zhang and Yanzhao Zhang and Dingkun Long and Wen Xie and Ziqi Dai and Jialong Tang and Huan Lin and Baosong Yang and Pengjun Xie and Fei Huang and Meishan Zhang and Wenjie Li and Min Zhang},
year={2024},
eprint={2407.19669},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.19669},
}
@misc{li2023gte,
title={Towards General Text Embeddings with Multi-stage Contrastive Learning},
author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang},
year={2023},
eprint={2308.03281},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2308.03281},
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
BookingCare/multilingual-e5-base-v3-onnx-quantized | BookingCare | sentence-similarity | [
"sentence-transformers",
"onnx",
"xlm-roberta",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2108.08787",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,729 | 1,733 | 17 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: multilingual-e5-base
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.97014925373135
- type: ap
value: 43.69351129103008
- type: f1
value: 73.38075030070492
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7237687366167
- type: ap
value: 82.22089859962671
- type: f1
value: 69.95532758884401
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.65517241379312
- type: ap
value: 28.507918657094738
- type: f1
value: 66.84516013726119
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.32976445396146
- type: ap
value: 20.720481637566014
- type: f1
value: 59.78002763416003
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.63775
- type: ap
value: 87.22277903861716
- type: f1
value: 90.60378636386807
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.546
- type: f1
value: 44.05666638370923
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.828
- type: f1
value: 41.2710255644252
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.534
- type: f1
value: 39.820743174270326
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.684
- type: f1
value: 39.11052682815307
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.436
- type: f1
value: 37.07082931930871
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.226000000000006
- type: f1
value: 36.65372077739185
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.699
- type: map_at_1000
value: 37.724000000000004
- type: map_at_3
value: 32.207
- type: map_at_5
value: 34.312
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 36.574
- type: mrr_at_100
value: 37.854
- type: mrr_at_1000
value: 37.878
- type: mrr_at_3
value: 32.385000000000005
- type: mrr_at_5
value: 34.48
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 44.230000000000004
- type: ndcg_at_100
value: 49.974000000000004
- type: ndcg_at_1000
value: 50.522999999999996
- type: ndcg_at_3
value: 35.363
- type: ndcg_at_5
value: 39.164
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 6.935
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.841
- type: precision_at_5
value: 10.754
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 95.235
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 44.523
- type: recall_at_5
value: 53.769999999999996
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.27789869854063
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.41979463347428
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.22752045109304
- type: mrr
value: 71.51112430198303
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.71147646622866
- type: cos_sim_spearman
value: 85.059167046486
- type: euclidean_pearson
value: 75.88421613600647
- type: euclidean_spearman
value: 75.12821787150585
- type: manhattan_pearson
value: 75.22005646957604
- type: manhattan_spearman
value: 74.42880434453272
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.23799582463465
- type: f1
value: 99.12665274878218
- type: precision
value: 99.07098121085595
- type: recall
value: 99.23799582463465
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.88685890380806
- type: f1
value: 97.59336708489249
- type: precision
value: 97.44662117543473
- type: recall
value: 97.88685890380806
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.47142362313821
- type: f1
value: 97.1989377670015
- type: precision
value: 97.06384944001847
- type: recall
value: 97.47142362313821
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.4728804634018
- type: f1
value: 98.2973494821836
- type: precision
value: 98.2095839915745
- type: recall
value: 98.4728804634018
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.74025974025975
- type: f1
value: 82.67420447730439
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.0380848063507
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.45956405670166
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.122
- type: map_at_10
value: 42.03
- type: map_at_100
value: 43.364000000000004
- type: map_at_1000
value: 43.474000000000004
- type: map_at_3
value: 38.804
- type: map_at_5
value: 40.585
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 48.227
- type: mrr_at_100
value: 49.018
- type: mrr_at_1000
value: 49.064
- type: mrr_at_3
value: 45.994
- type: mrr_at_5
value: 47.396
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 47.825
- type: ndcg_at_100
value: 52.852
- type: ndcg_at_1000
value: 54.891
- type: ndcg_at_3
value: 43.517
- type: ndcg_at_5
value: 45.493
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 8.956
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 32.122
- type: recall_at_10
value: 58.294999999999995
- type: recall_at_100
value: 79.726
- type: recall_at_1000
value: 93.099
- type: recall_at_3
value: 45.017
- type: recall_at_5
value: 51.002
- type: map_at_1
value: 29.677999999999997
- type: map_at_10
value: 38.684000000000005
- type: map_at_100
value: 39.812999999999995
- type: map_at_1000
value: 39.945
- type: map_at_3
value: 35.831
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 37.771
- type: mrr_at_10
value: 44.936
- type: mrr_at_100
value: 45.583
- type: mrr_at_1000
value: 45.634
- type: mrr_at_3
value: 42.771
- type: mrr_at_5
value: 43.994
- type: ndcg_at_1
value: 37.771
- type: ndcg_at_10
value: 44.059
- type: ndcg_at_100
value: 48.192
- type: ndcg_at_1000
value: 50.375
- type: ndcg_at_3
value: 40.172000000000004
- type: ndcg_at_5
value: 41.899
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 8.286999999999999
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.406000000000002
- type: precision_at_5
value: 13.745
- type: recall_at_1
value: 29.677999999999997
- type: recall_at_10
value: 53.071
- type: recall_at_100
value: 70.812
- type: recall_at_1000
value: 84.841
- type: recall_at_3
value: 41.016000000000005
- type: recall_at_5
value: 46.22
- type: map_at_1
value: 42.675000000000004
- type: map_at_10
value: 53.93599999999999
- type: map_at_100
value: 54.806999999999995
- type: map_at_1000
value: 54.867
- type: map_at_3
value: 50.934000000000005
- type: map_at_5
value: 52.583
- type: mrr_at_1
value: 48.339
- type: mrr_at_10
value: 57.265
- type: mrr_at_100
value: 57.873
- type: mrr_at_1000
value: 57.906
- type: mrr_at_3
value: 55.193000000000005
- type: mrr_at_5
value: 56.303000000000004
- type: ndcg_at_1
value: 48.339
- type: ndcg_at_10
value: 59.19799999999999
- type: ndcg_at_100
value: 62.743
- type: ndcg_at_1000
value: 63.99399999999999
- type: ndcg_at_3
value: 54.367
- type: ndcg_at_5
value: 56.548
- type: precision_at_1
value: 48.339
- type: precision_at_10
value: 9.216000000000001
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.72
- type: precision_at_5
value: 16.025
- type: recall_at_1
value: 42.675000000000004
- type: recall_at_10
value: 71.437
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.581
- type: recall_at_3
value: 58.434
- type: recall_at_5
value: 63.754
- type: map_at_1
value: 23.518
- type: map_at_10
value: 30.648999999999997
- type: map_at_100
value: 31.508999999999997
- type: map_at_1000
value: 31.604
- type: map_at_3
value: 28.247
- type: map_at_5
value: 29.65
- type: mrr_at_1
value: 25.650000000000002
- type: mrr_at_10
value: 32.771
- type: mrr_at_100
value: 33.554
- type: mrr_at_1000
value: 33.629999999999995
- type: mrr_at_3
value: 30.433
- type: mrr_at_5
value: 31.812
- type: ndcg_at_1
value: 25.650000000000002
- type: ndcg_at_10
value: 34.929
- type: ndcg_at_100
value: 39.382
- type: ndcg_at_1000
value: 41.913
- type: ndcg_at_3
value: 30.292
- type: ndcg_at_5
value: 32.629999999999995
- type: precision_at_1
value: 25.650000000000002
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.58
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 23.518
- type: recall_at_10
value: 46.19
- type: recall_at_100
value: 67.123
- type: recall_at_1000
value: 86.442
- type: recall_at_3
value: 33.678000000000004
- type: recall_at_5
value: 39.244
- type: map_at_1
value: 15.891
- type: map_at_10
value: 22.464000000000002
- type: map_at_100
value: 23.483
- type: map_at_1000
value: 23.613
- type: map_at_3
value: 20.080000000000002
- type: map_at_5
value: 21.526
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 26.712999999999997
- type: mrr_at_100
value: 27.650000000000002
- type: mrr_at_1000
value: 27.737000000000002
- type: mrr_at_3
value: 24.274
- type: mrr_at_5
value: 25.711000000000002
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 27.028999999999996
- type: ndcg_at_100
value: 32.064
- type: ndcg_at_1000
value: 35.188
- type: ndcg_at_3
value: 22.512999999999998
- type: ndcg_at_5
value: 24.89
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.811
- type: recall_at_1
value: 15.891
- type: recall_at_10
value: 37.261
- type: recall_at_100
value: 59.12
- type: recall_at_1000
value: 81.356
- type: recall_at_3
value: 24.741
- type: recall_at_5
value: 30.753999999999998
- type: map_at_1
value: 27.544
- type: map_at_10
value: 36.283
- type: map_at_100
value: 37.467
- type: map_at_1000
value: 37.574000000000005
- type: map_at_3
value: 33.528999999999996
- type: map_at_5
value: 35.028999999999996
- type: mrr_at_1
value: 34.166999999999994
- type: mrr_at_10
value: 41.866
- type: mrr_at_100
value: 42.666
- type: mrr_at_1000
value: 42.716
- type: mrr_at_3
value: 39.541
- type: mrr_at_5
value: 40.768
- type: ndcg_at_1
value: 34.166999999999994
- type: ndcg_at_10
value: 41.577
- type: ndcg_at_100
value: 46.687
- type: ndcg_at_1000
value: 48.967
- type: ndcg_at_3
value: 37.177
- type: ndcg_at_5
value: 39.097
- type: precision_at_1
value: 34.166999999999994
- type: precision_at_10
value: 7.420999999999999
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 17.291999999999998
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 27.544
- type: recall_at_10
value: 51.99399999999999
- type: recall_at_100
value: 73.738
- type: recall_at_1000
value: 89.33
- type: recall_at_3
value: 39.179
- type: recall_at_5
value: 44.385999999999996
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.475
- type: map_at_100
value: 36.626999999999995
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 32.818000000000005
- type: map_at_5
value: 34.397
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.784
- type: mrr_at_100
value: 41.602
- type: mrr_at_1000
value: 41.661
- type: mrr_at_3
value: 38.68
- type: mrr_at_5
value: 39.838
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 40.697
- type: ndcg_at_100
value: 45.799
- type: ndcg_at_1000
value: 48.235
- type: ndcg_at_3
value: 36.516
- type: ndcg_at_5
value: 38.515
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.202999999999999
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.145999999999999
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 50.995000000000005
- type: recall_at_100
value: 73.065
- type: recall_at_1000
value: 89.781
- type: recall_at_3
value: 39.073
- type: recall_at_5
value: 44.395
- type: map_at_1
value: 25.946583333333333
- type: map_at_10
value: 33.79725
- type: map_at_100
value: 34.86408333333333
- type: map_at_1000
value: 34.9795
- type: map_at_3
value: 31.259999999999998
- type: map_at_5
value: 32.71541666666666
- type: mrr_at_1
value: 30.863749999999996
- type: mrr_at_10
value: 37.99183333333333
- type: mrr_at_100
value: 38.790499999999994
- type: mrr_at_1000
value: 38.85575000000001
- type: mrr_at_3
value: 35.82083333333333
- type: mrr_at_5
value: 37.07533333333333
- type: ndcg_at_1
value: 30.863749999999996
- type: ndcg_at_10
value: 38.52141666666667
- type: ndcg_at_100
value: 43.17966666666667
- type: ndcg_at_1000
value: 45.64608333333333
- type: ndcg_at_3
value: 34.333000000000006
- type: ndcg_at_5
value: 36.34975
- type: precision_at_1
value: 30.863749999999996
- type: precision_at_10
value: 6.598999999999999
- type: precision_at_100
value: 1.0502500000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 15.557583333333334
- type: precision_at_5
value: 11.020000000000001
- type: recall_at_1
value: 25.946583333333333
- type: recall_at_10
value: 48.36991666666666
- type: recall_at_100
value: 69.02408333333334
- type: recall_at_1000
value: 86.43858333333331
- type: recall_at_3
value: 36.4965
- type: recall_at_5
value: 41.76258333333334
- type: map_at_1
value: 22.431
- type: map_at_10
value: 28.889
- type: map_at_100
value: 29.642000000000003
- type: map_at_1000
value: 29.742
- type: map_at_3
value: 26.998
- type: map_at_5
value: 28.172000000000004
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 31.763
- type: mrr_at_100
value: 32.443
- type: mrr_at_1000
value: 32.531
- type: mrr_at_3
value: 29.959000000000003
- type: mrr_at_5
value: 31.063000000000002
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 36.5
- type: ndcg_at_1000
value: 39.133
- type: ndcg_at_3
value: 29.25
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 22.431
- type: recall_at_10
value: 41.134
- type: recall_at_100
value: 59.28600000000001
- type: recall_at_1000
value: 78.857
- type: recall_at_3
value: 31.926
- type: recall_at_5
value: 36.335
- type: map_at_1
value: 17.586
- type: map_at_10
value: 23.304
- type: map_at_100
value: 24.159
- type: map_at_1000
value: 24.281
- type: map_at_3
value: 21.316
- type: map_at_5
value: 22.383
- type: mrr_at_1
value: 21.645
- type: mrr_at_10
value: 27.365000000000002
- type: mrr_at_100
value: 28.108
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 25.482
- type: mrr_at_5
value: 26.479999999999997
- type: ndcg_at_1
value: 21.645
- type: ndcg_at_10
value: 27.306
- type: ndcg_at_100
value: 31.496000000000002
- type: ndcg_at_1000
value: 34.53
- type: ndcg_at_3
value: 23.73
- type: ndcg_at_5
value: 25.294
- type: precision_at_1
value: 21.645
- type: precision_at_10
value: 4.797
- type: precision_at_100
value: 0.8059999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.850999999999999
- type: precision_at_5
value: 7.736
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 35.481
- type: recall_at_100
value: 54.534000000000006
- type: recall_at_1000
value: 76.456
- type: recall_at_3
value: 25.335
- type: recall_at_5
value: 29.473
- type: map_at_1
value: 25.095
- type: map_at_10
value: 32.374
- type: map_at_100
value: 33.537
- type: map_at_1000
value: 33.634
- type: map_at_3
value: 30.089
- type: map_at_5
value: 31.433
- type: mrr_at_1
value: 29.198
- type: mrr_at_10
value: 36.01
- type: mrr_at_100
value: 37.022
- type: mrr_at_1000
value: 37.083
- type: mrr_at_3
value: 33.94
- type: mrr_at_5
value: 35.148
- type: ndcg_at_1
value: 29.198
- type: ndcg_at_10
value: 36.729
- type: ndcg_at_100
value: 42.114000000000004
- type: ndcg_at_1000
value: 44.592
- type: ndcg_at_3
value: 32.644
- type: ndcg_at_5
value: 34.652
- type: precision_at_1
value: 29.198
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 10.093
- type: recall_at_1
value: 25.095
- type: recall_at_10
value: 46.392
- type: recall_at_100
value: 69.706
- type: recall_at_1000
value: 87.738
- type: recall_at_3
value: 35.303000000000004
- type: recall_at_5
value: 40.441
- type: map_at_1
value: 26.857999999999997
- type: map_at_10
value: 34.066
- type: map_at_100
value: 35.671
- type: map_at_1000
value: 35.881
- type: map_at_3
value: 31.304
- type: map_at_5
value: 32.885
- type: mrr_at_1
value: 32.411
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.894
- type: mrr_at_1000
value: 39.959
- type: mrr_at_3
value: 36.626999999999995
- type: mrr_at_5
value: 38.011
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_10
value: 39.208
- type: ndcg_at_100
value: 44.626
- type: ndcg_at_1000
value: 47.43
- type: ndcg_at_3
value: 35.091
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 32.411
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 26.857999999999997
- type: recall_at_10
value: 47.407
- type: recall_at_100
value: 72.236
- type: recall_at_1000
value: 90.77
- type: recall_at_3
value: 35.125
- type: recall_at_5
value: 40.522999999999996
- type: map_at_1
value: 21.3
- type: map_at_10
value: 27.412999999999997
- type: map_at_100
value: 28.29
- type: map_at_1000
value: 28.398
- type: map_at_3
value: 25.169999999999998
- type: map_at_5
value: 26.496
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 29.215000000000003
- type: mrr_at_100
value: 30.073
- type: mrr_at_1000
value: 30.156
- type: mrr_at_3
value: 26.956000000000003
- type: mrr_at_5
value: 28.38
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 31.113000000000003
- type: ndcg_at_100
value: 35.701
- type: ndcg_at_1000
value: 38.505
- type: ndcg_at_3
value: 26.727
- type: ndcg_at_5
value: 29.037000000000003
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 4.787
- type: precision_at_100
value: 0.763
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 11.091
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 21.3
- type: recall_at_10
value: 40.782000000000004
- type: recall_at_100
value: 62.13999999999999
- type: recall_at_1000
value: 83.012
- type: recall_at_3
value: 29.131
- type: recall_at_5
value: 34.624
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.631
- type: map_at_10
value: 16.634999999999998
- type: map_at_100
value: 18.23
- type: map_at_1000
value: 18.419
- type: map_at_3
value: 13.66
- type: map_at_5
value: 15.173
- type: mrr_at_1
value: 21.368000000000002
- type: mrr_at_10
value: 31.56
- type: mrr_at_100
value: 32.58
- type: mrr_at_1000
value: 32.633
- type: mrr_at_3
value: 28.241
- type: mrr_at_5
value: 30.225
- type: ndcg_at_1
value: 21.368000000000002
- type: ndcg_at_10
value: 23.855999999999998
- type: ndcg_at_100
value: 30.686999999999998
- type: ndcg_at_1000
value: 34.327000000000005
- type: ndcg_at_3
value: 18.781
- type: ndcg_at_5
value: 20.73
- type: precision_at_1
value: 21.368000000000002
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.217
- type: precision_at_3
value: 13.876
- type: precision_at_5
value: 11.062
- type: recall_at_1
value: 9.631
- type: recall_at_10
value: 29.517
- type: recall_at_100
value: 53.452
- type: recall_at_1000
value: 74.115
- type: recall_at_3
value: 17.605999999999998
- type: recall_at_5
value: 22.505
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.885
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 26.316
- type: map_at_1000
value: 27.869
- type: map_at_3
value: 13.719000000000001
- type: map_at_5
value: 15.716
- type: mrr_at_1
value: 66
- type: mrr_at_10
value: 74.263
- type: mrr_at_100
value: 74.519
- type: mrr_at_1000
value: 74.531
- type: mrr_at_3
value: 72.458
- type: mrr_at_5
value: 73.321
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.355999999999995
- type: ndcg_at_100
value: 44.366
- type: ndcg_at_1000
value: 51.771
- type: ndcg_at_3
value: 45.195
- type: ndcg_at_5
value: 42.187000000000005
- type: precision_at_1
value: 66
- type: precision_at_10
value: 31.75
- type: precision_at_100
value: 10.11
- type: precision_at_1000
value: 1.9800000000000002
- type: precision_at_3
value: 48.167
- type: precision_at_5
value: 40.050000000000004
- type: recall_at_1
value: 8.885
- type: recall_at_10
value: 24.471999999999998
- type: recall_at_100
value: 49.669000000000004
- type: recall_at_1000
value: 73.383
- type: recall_at_3
value: 14.872
- type: recall_at_5
value: 18.262999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.18
- type: f1
value: 40.26878691789978
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.751999999999995
- type: map_at_10
value: 74.131
- type: map_at_100
value: 74.407
- type: map_at_1000
value: 74.423
- type: map_at_3
value: 72.329
- type: map_at_5
value: 73.555
- type: mrr_at_1
value: 67.282
- type: mrr_at_10
value: 78.292
- type: mrr_at_100
value: 78.455
- type: mrr_at_1000
value: 78.458
- type: mrr_at_3
value: 76.755
- type: mrr_at_5
value: 77.839
- type: ndcg_at_1
value: 67.282
- type: ndcg_at_10
value: 79.443
- type: ndcg_at_100
value: 80.529
- type: ndcg_at_1000
value: 80.812
- type: ndcg_at_3
value: 76.281
- type: ndcg_at_5
value: 78.235
- type: precision_at_1
value: 67.282
- type: precision_at_10
value: 10.078
- type: precision_at_100
value: 1.082
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.178
- type: precision_at_5
value: 19.232
- type: recall_at_1
value: 62.751999999999995
- type: recall_at_10
value: 91.521
- type: recall_at_100
value: 95.997
- type: recall_at_1000
value: 97.775
- type: recall_at_3
value: 83.131
- type: recall_at_5
value: 87.93299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.861
- type: map_at_10
value: 30.252000000000002
- type: map_at_100
value: 32.082
- type: map_at_1000
value: 32.261
- type: map_at_3
value: 25.909
- type: map_at_5
value: 28.296
- type: mrr_at_1
value: 37.346000000000004
- type: mrr_at_10
value: 45.802
- type: mrr_at_100
value: 46.611999999999995
- type: mrr_at_1000
value: 46.659
- type: mrr_at_3
value: 43.056
- type: mrr_at_5
value: 44.637
- type: ndcg_at_1
value: 37.346000000000004
- type: ndcg_at_10
value: 38.169
- type: ndcg_at_100
value: 44.864
- type: ndcg_at_1000
value: 47.974
- type: ndcg_at_3
value: 33.619
- type: ndcg_at_5
value: 35.317
- type: precision_at_1
value: 37.346000000000004
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.775
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.325
- type: precision_at_5
value: 16.852
- type: recall_at_1
value: 18.861
- type: recall_at_10
value: 45.672000000000004
- type: recall_at_100
value: 70.60499999999999
- type: recall_at_1000
value: 89.216
- type: recall_at_3
value: 30.361
- type: recall_at_5
value: 36.998999999999995
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.852999999999994
- type: map_at_10
value: 59.961
- type: map_at_100
value: 60.78
- type: map_at_1000
value: 60.843
- type: map_at_3
value: 56.39999999999999
- type: map_at_5
value: 58.646
- type: mrr_at_1
value: 75.70599999999999
- type: mrr_at_10
value: 82.321
- type: mrr_at_100
value: 82.516
- type: mrr_at_1000
value: 82.525
- type: mrr_at_3
value: 81.317
- type: mrr_at_5
value: 81.922
- type: ndcg_at_1
value: 75.70599999999999
- type: ndcg_at_10
value: 68.557
- type: ndcg_at_100
value: 71.485
- type: ndcg_at_1000
value: 72.71600000000001
- type: ndcg_at_3
value: 63.524
- type: ndcg_at_5
value: 66.338
- type: precision_at_1
value: 75.70599999999999
- type: precision_at_10
value: 14.463000000000001
- type: precision_at_100
value: 1.677
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 40.806
- type: precision_at_5
value: 26.709
- type: recall_at_1
value: 37.852999999999994
- type: recall_at_10
value: 72.316
- type: recall_at_100
value: 83.842
- type: recall_at_1000
value: 91.999
- type: recall_at_3
value: 61.209
- type: recall_at_5
value: 66.77199999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.46039999999999
- type: ap
value: 79.9812521351881
- type: f1
value: 85.31722909702084
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.704
- type: map_at_10
value: 35.329
- type: map_at_100
value: 36.494
- type: map_at_1000
value: 36.541000000000004
- type: map_at_3
value: 31.476
- type: map_at_5
value: 33.731
- type: mrr_at_1
value: 23.294999999999998
- type: mrr_at_10
value: 35.859
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.008
- type: mrr_at_3
value: 32.085
- type: mrr_at_5
value: 34.299
- type: ndcg_at_1
value: 23.324
- type: ndcg_at_10
value: 42.274
- type: ndcg_at_100
value: 47.839999999999996
- type: ndcg_at_1000
value: 48.971
- type: ndcg_at_3
value: 34.454
- type: ndcg_at_5
value: 38.464
- type: precision_at_1
value: 23.324
- type: precision_at_10
value: 6.648
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.850999999999999
- type: recall_at_1
value: 22.704
- type: recall_at_10
value: 63.660000000000004
- type: recall_at_100
value: 89.29899999999999
- type: recall_at_1000
value: 97.88900000000001
- type: recall_at_3
value: 42.441
- type: recall_at_5
value: 52.04
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.1326949384405
- type: f1
value: 92.89743579612082
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.62524654832347
- type: f1
value: 88.65106082263151
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.59039359573046
- type: f1
value: 90.31532892105662
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.21046038208581
- type: f1
value: 86.41459529813113
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.3180351380423
- type: f1
value: 86.71383078226444
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.24231464737792
- type: f1
value: 86.31845567592403
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945736
- type: f1
value: 57.52079940417103
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.2341504649197
- type: f1
value: 51.349951558039244
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.27418278852569
- type: f1
value: 50.1714985749095
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.68243031631694
- type: f1
value: 50.1066160836192
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.2362854069559
- type: f1
value: 48.821279948766424
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.71428571428571
- type: f1
value: 53.94611389496195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.97646267652992
- type: f1
value: 57.26797883561521
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.65501008742435
- type: f1
value: 50.416258382177034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.45796906523201
- type: f1
value: 53.306690547422185
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.59246805648957
- type: f1
value: 59.818381969051494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.126429051782104
- type: f1
value: 58.25993593933026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.057162071284466
- type: f1
value: 46.96095728790911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.64425016812375
- type: f1
value: 62.858291698755764
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 62.44639030604241
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.68056489576328
- type: f1
value: 61.775326758789504
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.11163416274377
- type: f1
value: 69.70789096927015
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.40282447881641
- type: f1
value: 66.38492065671895
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.24613315400134
- type: f1
value: 64.3348019501336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.78345662407531
- type: f1
value: 62.21279452354622
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.9455279085407
- type: f1
value: 65.48193124964094
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.05110961667788
- type: f1
value: 58.097856564684534
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.95292535305985
- type: f1
value: 62.09182174767901
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.97310020174848
- type: f1
value: 61.14252567730396
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 57.044041742492034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.63752521856085
- type: f1
value: 63.889340907205316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.385339609952936
- type: f1
value: 53.449033750088304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.93073301950234
- type: f1
value: 65.9884357824104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.94418291862812
- type: f1
value: 66.48740222583132
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.26025554808339
- type: f1
value: 50.19562815100793
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.98789509078682
- type: f1
value: 46.65788438676836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.68728984532616
- type: f1
value: 41.642419349541996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.19300605245461
- type: f1
value: 55.8626492442437
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33826496301278
- type: f1
value: 63.89499791648792
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.33960995292536
- type: f1
value: 57.15242464180892
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.09347679892402
- type: f1
value: 59.64733214063841
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.75924680564896
- type: f1
value: 55.96585692366827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.48486886348352
- type: f1
value: 59.45143559032946
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.56422326832549
- type: f1
value: 54.96368702901926
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.18022864828512
- type: f1
value: 63.05369805040634
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.30329522528581
- type: f1
value: 64.06084612020727
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.36919973100201
- type: f1
value: 65.12154124788887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.98117014122394
- type: f1
value: 66.41847559806962
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 62.17067330740817
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.01815736381977
- type: f1
value: 66.24988369607843
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.34700739744452
- type: f1
value: 59.957933424941636
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.23402824478815
- type: f1
value: 57.98836976018471
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.43849680666855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.998655010087425
- type: f1
value: 52.83737515406804
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.71217215870882
- type: f1
value: 55.051794977833026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.724277067921996
- type: f1
value: 56.33485571838306
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631473
- type: f1
value: 64.96772366193588
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.860793544048406
- type: f1
value: 58.148845819115394
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.40753194351043
- type: f1
value: 63.18903778054698
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.52320107599194
- type: f1
value: 58.356144563398516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.17014122394083
- type: f1
value: 63.919964062638925
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.15601882985878
- type: f1
value: 67.01451905761371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 64.14420425129063
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.08742434431743
- type: f1
value: 63.044060042311756
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.52387357094821
- type: f1
value: 56.82398588814534
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.239408204438476
- type: f1
value: 61.92570286170469
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.74915938130463
- type: f1
value: 62.130740689396276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.00336247478144
- type: f1
value: 63.71080635228055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.837928715534645
- type: f1
value: 50.390741680320836
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.42098184263618
- type: f1
value: 71.41355113538995
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95359784801613
- type: f1
value: 71.42699340156742
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.18157363819772
- type: f1
value: 69.74836113037671
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 76.78000685068261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.5030262273033
- type: f1
value: 71.71620130425673
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.24546065904505
- type: f1
value: 69.07638311730359
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.12911903160726
- type: f1
value: 68.32651736539815
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195025
- type: f1
value: 71.33986549860187
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44451916610626
- type: f1
value: 66.90192664503866
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.16274377942166
- type: f1
value: 68.01090953775066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.75319435104237
- type: f1
value: 70.18035309201403
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.14391392064559
- type: f1
value: 61.48286540778145
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.70275722932078
- type: f1
value: 70.26164779846495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.93813046402153
- type: f1
value: 58.8852862116525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.320107599193
- type: f1
value: 72.19836409602924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.65366509751176
- type: f1
value: 74.55188288799579
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.694014794889036
- type: f1
value: 58.11353311721067
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.37457969065231
- type: f1
value: 52.81306134311697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.3086751849361
- type: f1
value: 45.396449765419376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.151983860121064
- type: f1
value: 60.31762544281696
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.44788164088769
- type: f1
value: 71.68150151736367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.81439139206455
- type: f1
value: 62.06735559105593
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04303967720242
- type: f1
value: 66.68298851670133
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.43913920645595
- type: f1
value: 60.25605977560783
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.90316072629456
- type: f1
value: 65.1325924692381
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.63752521856086
- type: f1
value: 59.14284778039585
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.63080026899797
- type: f1
value: 70.89771864626877
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.10827168796234
- type: f1
value: 71.71954219691159
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.59515803631471
- type: f1
value: 70.05040128099003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.83389374579691
- type: f1
value: 70.84877936562735
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18628110289173
- type: f1
value: 68.97232927921841
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.99260255548083
- type: f1
value: 72.85139492157732
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- type: f1
value: 65.08833655469431
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48621385339611
- type: f1
value: 64.43483199071298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.14391392064559
- type: f1
value: 72.2580822579741
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.88567585743107
- type: f1
value: 58.3073765932569
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.38399462004034
- type: f1
value: 60.82139544252606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 60.71443370385374
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- type: f1
value: 70.99761812049401
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.73705447209146
- type: f1
value: 61.680849331794796
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.66778749159381
- type: f1
value: 71.17320646080115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.640215198386
- type: f1
value: 63.301805157015444
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.00672494956288
- type: f1
value: 70.26005548582106
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.42030934767989
- type: f1
value: 75.2074842882598
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.69266980497646
- type: f1
value: 70.94103167391192
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.91697191169135
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.434000079573313
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.96683513343383
- type: mrr
value: 31.967364078714834
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.5280000000000005
- type: map_at_10
value: 11.793
- type: map_at_100
value: 14.496999999999998
- type: map_at_1000
value: 15.783
- type: map_at_3
value: 8.838
- type: map_at_5
value: 10.07
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.531000000000006
- type: mrr_at_100
value: 52.205
- type: mrr_at_1000
value: 52.242999999999995
- type: mrr_at_3
value: 49.431999999999995
- type: mrr_at_5
value: 50.470000000000006
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 32.464999999999996
- type: ndcg_at_100
value: 28.927999999999997
- type: ndcg_at_1000
value: 37.629000000000005
- type: ndcg_at_3
value: 37.845
- type: ndcg_at_5
value: 35.147
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.932000000000002
- type: precision_at_100
value: 7.17
- type: precision_at_1000
value: 1.967
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 29.907
- type: recall_at_1
value: 5.5280000000000005
- type: recall_at_10
value: 15.568000000000001
- type: recall_at_100
value: 28.54
- type: recall_at_1000
value: 59.864
- type: recall_at_3
value: 9.822000000000001
- type: recall_at_5
value: 11.726
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.041000000000004
- type: map_at_10
value: 52.664
- type: map_at_100
value: 53.477
- type: map_at_1000
value: 53.505
- type: map_at_3
value: 48.510999999999996
- type: map_at_5
value: 51.036
- type: mrr_at_1
value: 41.338
- type: mrr_at_10
value: 55.071000000000005
- type: mrr_at_100
value: 55.672
- type: mrr_at_1000
value: 55.689
- type: mrr_at_3
value: 51.82
- type: mrr_at_5
value: 53.852
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 60.01800000000001
- type: ndcg_at_100
value: 63.409000000000006
- type: ndcg_at_1000
value: 64.017
- type: ndcg_at_3
value: 52.44799999999999
- type: ndcg_at_5
value: 56.571000000000005
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 9.531
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.416
- type: precision_at_5
value: 16.46
- type: recall_at_1
value: 37.041000000000004
- type: recall_at_10
value: 79.76299999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.851
- type: recall_at_3
value: 60.465
- type: recall_at_5
value: 69.906
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 83.758
- type: map_at_100
value: 84.406
- type: map_at_1000
value: 84.425
- type: map_at_3
value: 80.839
- type: map_at_5
value: 82.646
- type: mrr_at_1
value: 80.62
- type: mrr_at_10
value: 86.947
- type: mrr_at_100
value: 87.063
- type: mrr_at_1000
value: 87.064
- type: mrr_at_3
value: 85.96000000000001
- type: mrr_at_5
value: 86.619
- type: ndcg_at_1
value: 80.63
- type: ndcg_at_10
value: 87.64800000000001
- type: ndcg_at_100
value: 88.929
- type: ndcg_at_1000
value: 89.054
- type: ndcg_at_3
value: 84.765
- type: ndcg_at_5
value: 86.291
- type: precision_at_1
value: 80.63
- type: precision_at_10
value: 13.314
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.1
- type: precision_at_5
value: 24.372
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 94.955
- type: recall_at_100
value: 99.38
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 86.60600000000001
- type: recall_at_5
value: 90.997
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.41329517878427
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.171278362748666
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 9.895
- type: map_at_100
value: 11.776
- type: map_at_1000
value: 12.084
- type: map_at_3
value: 7.2669999999999995
- type: map_at_5
value: 8.620999999999999
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 31.112000000000002
- type: mrr_at_100
value: 32.274
- type: mrr_at_1000
value: 32.35
- type: mrr_at_3
value: 28.133000000000003
- type: mrr_at_5
value: 29.892999999999997
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.163999999999998
- type: ndcg_at_100
value: 24.738
- type: ndcg_at_1000
value: 30.316
- type: ndcg_at_3
value: 16.665
- type: ndcg_at_5
value: 14.478
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.963
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.698
- type: recall_at_100
value: 39.838
- type: recall_at_1000
value: 66.893
- type: recall_at_3
value: 9.418
- type: recall_at_5
value: 12.773000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.90453315738294
- type: cos_sim_spearman
value: 78.51197850080254
- type: euclidean_pearson
value: 80.09647123597748
- type: euclidean_spearman
value: 78.63548011514061
- type: manhattan_pearson
value: 80.10645285675231
- type: manhattan_spearman
value: 78.57861806068901
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.2616156846401
- type: cos_sim_spearman
value: 76.69713867850156
- type: euclidean_pearson
value: 77.97948563800394
- type: euclidean_spearman
value: 74.2371211567807
- type: manhattan_pearson
value: 77.69697879669705
- type: manhattan_spearman
value: 73.86529778022278
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.0293269315045
- type: cos_sim_spearman
value: 78.02555120584198
- type: euclidean_pearson
value: 78.25398100379078
- type: euclidean_spearman
value: 78.66963870599464
- type: manhattan_pearson
value: 78.14314682167348
- type: manhattan_spearman
value: 78.57692322969135
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.16989925136942
- type: cos_sim_spearman
value: 76.5996225327091
- type: euclidean_pearson
value: 77.8319003279786
- type: euclidean_spearman
value: 76.42824009468998
- type: manhattan_pearson
value: 77.69118862737736
- type: manhattan_spearman
value: 76.25568104762812
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.42012286935325
- type: cos_sim_spearman
value: 88.15654297884122
- type: euclidean_pearson
value: 87.34082819427852
- type: euclidean_spearman
value: 88.06333589547084
- type: manhattan_pearson
value: 87.25115596784842
- type: manhattan_spearman
value: 87.9559927695203
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.88222044996712
- type: cos_sim_spearman
value: 84.28476589061077
- type: euclidean_pearson
value: 83.17399758058309
- type: euclidean_spearman
value: 83.85497357244542
- type: manhattan_pearson
value: 83.0308397703786
- type: manhattan_spearman
value: 83.71554539935046
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.20682986257339
- type: cos_sim_spearman
value: 79.94567120362092
- type: euclidean_pearson
value: 79.43122480368902
- type: euclidean_spearman
value: 79.94802077264987
- type: manhattan_pearson
value: 79.32653021527081
- type: manhattan_spearman
value: 79.80961146709178
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.46578144394383
- type: cos_sim_spearman
value: 74.52496637472179
- type: euclidean_pearson
value: 72.2903807076809
- type: euclidean_spearman
value: 73.55549359771645
- type: manhattan_pearson
value: 72.09324837709393
- type: manhattan_spearman
value: 73.36743103606581
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.37272335116
- type: cos_sim_spearman
value: 71.26702117766037
- type: euclidean_pearson
value: 67.114829954434
- type: euclidean_spearman
value: 66.37938893947761
- type: manhattan_pearson
value: 66.79688574095246
- type: manhattan_spearman
value: 66.17292828079667
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.61016770129092
- type: cos_sim_spearman
value: 82.08515426632214
- type: euclidean_pearson
value: 80.557340361131
- type: euclidean_spearman
value: 80.37585812266175
- type: manhattan_pearson
value: 80.6782873404285
- type: manhattan_spearman
value: 80.6678073032024
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.00150745350108
- type: cos_sim_spearman
value: 87.83441972211425
- type: euclidean_pearson
value: 87.94826702308792
- type: euclidean_spearman
value: 87.46143974860725
- type: manhattan_pearson
value: 87.97560344306105
- type: manhattan_spearman
value: 87.5267102829796
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 64.76325252267235
- type: cos_sim_spearman
value: 63.32615095463905
- type: euclidean_pearson
value: 64.07920669155716
- type: euclidean_spearman
value: 61.21409893072176
- type: manhattan_pearson
value: 64.26308625680016
- type: manhattan_spearman
value: 61.2438185254079
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.82644463022595
- type: cos_sim_spearman
value: 76.50381269945073
- type: euclidean_pearson
value: 75.1328548315934
- type: euclidean_spearman
value: 75.63761139408453
- type: manhattan_pearson
value: 75.18610101241407
- type: manhattan_spearman
value: 75.30669266354164
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49994164686832
- type: cos_sim_spearman
value: 86.73743986245549
- type: euclidean_pearson
value: 86.8272894387145
- type: euclidean_spearman
value: 85.97608491000507
- type: manhattan_pearson
value: 86.74960140396779
- type: manhattan_spearman
value: 85.79285984190273
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.58172210788469
- type: cos_sim_spearman
value: 80.17516468334607
- type: euclidean_pearson
value: 77.56537843470504
- type: euclidean_spearman
value: 77.57264627395521
- type: manhattan_pearson
value: 78.09703521695943
- type: manhattan_spearman
value: 78.15942760916954
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.7589932931751
- type: cos_sim_spearman
value: 80.15210089028162
- type: euclidean_pearson
value: 77.54135223516057
- type: euclidean_spearman
value: 77.52697996368764
- type: manhattan_pearson
value: 77.65734439572518
- type: manhattan_spearman
value: 77.77702992016121
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.16682365511267
- type: cos_sim_spearman
value: 79.25311267628506
- type: euclidean_pearson
value: 77.54882036762244
- type: euclidean_spearman
value: 77.33212935194827
- type: manhattan_pearson
value: 77.98405516064015
- type: manhattan_spearman
value: 77.85075717865719
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.10473294775917
- type: cos_sim_spearman
value: 61.82780474476838
- type: euclidean_pearson
value: 45.885111672377256
- type: euclidean_spearman
value: 56.88306351932454
- type: manhattan_pearson
value: 46.101218127323186
- type: manhattan_spearman
value: 56.80953694186333
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.781923079584146
- type: cos_sim_spearman
value: 55.95098449691107
- type: euclidean_pearson
value: 25.4571031323205
- type: euclidean_spearman
value: 49.859978118078935
- type: manhattan_pearson
value: 25.624938455041384
- type: manhattan_spearman
value: 49.99546185049401
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.00618133997907
- type: cos_sim_spearman
value: 66.57896677718321
- type: euclidean_pearson
value: 42.60118466388821
- type: euclidean_spearman
value: 62.8210759715209
- type: manhattan_pearson
value: 42.63446860604094
- type: manhattan_spearman
value: 62.73803068925271
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.460759121626943
- type: cos_sim_spearman
value: 34.13459007469131
- type: euclidean_pearson
value: 6.0917739325525195
- type: euclidean_spearman
value: 27.9947262664867
- type: manhattan_pearson
value: 6.16877864169911
- type: manhattan_spearman
value: 28.00664163971514
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.42546621771696
- type: cos_sim_spearman
value: 63.699663168970474
- type: euclidean_pearson
value: 38.12085278789738
- type: euclidean_spearman
value: 58.12329140741536
- type: manhattan_pearson
value: 37.97364549443335
- type: manhattan_spearman
value: 57.81545502318733
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.82241380954213
- type: cos_sim_spearman
value: 57.86569456006391
- type: euclidean_pearson
value: 31.80480070178813
- type: euclidean_spearman
value: 52.484000620130104
- type: manhattan_pearson
value: 31.952708554646097
- type: manhattan_spearman
value: 52.8560972356195
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.00447170498087
- type: cos_sim_spearman
value: 60.664116225735164
- type: euclidean_pearson
value: 33.87382555421702
- type: euclidean_spearman
value: 55.74649067458667
- type: manhattan_pearson
value: 33.99117246759437
- type: manhattan_spearman
value: 55.98749034923899
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.06497233105448
- type: cos_sim_spearman
value: 65.62968801135676
- type: euclidean_pearson
value: 47.482076613243905
- type: euclidean_spearman
value: 62.65137791498299
- type: manhattan_pearson
value: 47.57052626104093
- type: manhattan_spearman
value: 62.436916516613294
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.49397298562575
- type: cos_sim_spearman
value: 74.79604041187868
- type: euclidean_pearson
value: 49.661891561317795
- type: euclidean_spearman
value: 70.31535537621006
- type: manhattan_pearson
value: 49.553715741850006
- type: manhattan_spearman
value: 70.24779344636806
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.640574515348696
- type: cos_sim_spearman
value: 54.927959317689
- type: euclidean_pearson
value: 29.00139666967476
- type: euclidean_spearman
value: 41.86386566971605
- type: manhattan_pearson
value: 29.47411067730344
- type: manhattan_spearman
value: 42.337438424952786
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.14095292259312
- type: cos_sim_spearman
value: 73.99017581234789
- type: euclidean_pearson
value: 46.46304297872084
- type: euclidean_spearman
value: 60.91834114800041
- type: manhattan_pearson
value: 47.07072666338692
- type: manhattan_spearman
value: 61.70415727977926
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.27184653359575
- type: cos_sim_spearman
value: 77.76070252418626
- type: euclidean_pearson
value: 62.30586577544778
- type: euclidean_spearman
value: 75.14246629110978
- type: manhattan_pearson
value: 62.328196884927046
- type: manhattan_spearman
value: 75.1282792981433
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.59448528829957
- type: cos_sim_spearman
value: 70.37277734222123
- type: euclidean_pearson
value: 57.63145565721123
- type: euclidean_spearman
value: 66.10113048304427
- type: manhattan_pearson
value: 57.18897811586808
- type: manhattan_spearman
value: 66.5595511215901
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.37520607720838
- type: cos_sim_spearman
value: 69.92282148997948
- type: euclidean_pearson
value: 40.55768770125291
- type: euclidean_spearman
value: 55.189128944669605
- type: manhattan_pearson
value: 41.03566433468883
- type: manhattan_spearman
value: 55.61251893174558
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.791929533771835
- type: cos_sim_spearman
value: 66.45819707662093
- type: euclidean_pearson
value: 39.03686018511092
- type: euclidean_spearman
value: 56.01282695640428
- type: manhattan_pearson
value: 38.91586623619632
- type: manhattan_spearman
value: 56.69394943612747
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.82224468473866
- type: cos_sim_spearman
value: 59.467307194781164
- type: euclidean_pearson
value: 27.428459190256145
- type: euclidean_spearman
value: 60.83463107397519
- type: manhattan_pearson
value: 27.487391578496638
- type: manhattan_spearman
value: 61.281380460246496
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 16.306666792752644
- type: cos_sim_spearman
value: 39.35486427252405
- type: euclidean_pearson
value: -2.7887154897955435
- type: euclidean_spearman
value: 27.1296051831719
- type: manhattan_pearson
value: -3.202291270581297
- type: manhattan_spearman
value: 26.32895849218158
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.67006803805076
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 46.91884681500483
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 46.88391675325812
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.79555591223837
- type: cos_sim_spearman
value: 85.63658602085185
- type: euclidean_pearson
value: 85.22080894037671
- type: euclidean_spearman
value: 85.54113580167038
- type: manhattan_pearson
value: 85.1639505960118
- type: manhattan_spearman
value: 85.43502665436196
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.73900991689766
- type: mrr
value: 94.81624131133934
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.678000000000004
- type: map_at_10
value: 65.135
- type: map_at_100
value: 65.824
- type: map_at_1000
value: 65.852
- type: map_at_3
value: 62.736000000000004
- type: map_at_5
value: 64.411
- type: mrr_at_1
value: 58.333
- type: mrr_at_10
value: 66.5
- type: mrr_at_100
value: 67.053
- type: mrr_at_1000
value: 67.08
- type: mrr_at_3
value: 64.944
- type: mrr_at_5
value: 65.89399999999999
- type: ndcg_at_1
value: 58.333
- type: ndcg_at_10
value: 69.34700000000001
- type: ndcg_at_100
value: 72.32
- type: ndcg_at_1000
value: 73.014
- type: ndcg_at_3
value: 65.578
- type: ndcg_at_5
value: 67.738
- type: precision_at_1
value: 58.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 55.678000000000004
- type: recall_at_10
value: 80.72200000000001
- type: recall_at_100
value: 93.93299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 75.978
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74653465346535
- type: cos_sim_ap
value: 93.01476369929063
- type: cos_sim_f1
value: 86.93009118541033
- type: cos_sim_precision
value: 88.09034907597535
- type: cos_sim_recall
value: 85.8
- type: dot_accuracy
value: 99.22970297029703
- type: dot_ap
value: 51.58725659485144
- type: dot_f1
value: 53.51351351351352
- type: dot_precision
value: 58.235294117647065
- type: dot_recall
value: 49.5
- type: euclidean_accuracy
value: 99.74356435643564
- type: euclidean_ap
value: 92.40332894384368
- type: euclidean_f1
value: 86.97838109602817
- type: euclidean_precision
value: 87.46208291203236
- type: euclidean_recall
value: 86.5
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 92.01320815721121
- type: manhattan_f1
value: 86.4135864135864
- type: manhattan_precision
value: 86.32734530938124
- type: manhattan_recall
value: 86.5
- type: max_accuracy
value: 99.74653465346535
- type: max_ap
value: 93.01476369929063
- type: max_f1
value: 86.97838109602817
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.2660514302523
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.4637783572547
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.41377758357637
- type: mrr
value: 50.138451213818854
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.887846011166594
- type: cos_sim_spearman
value: 30.10823258355903
- type: dot_pearson
value: 12.888049550236385
- type: dot_spearman
value: 12.827495903098123
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.667
- type: map_at_100
value: 9.15
- type: map_at_1000
value: 22.927
- type: map_at_3
value: 0.573
- type: map_at_5
value: 0.915
- type: mrr_at_1
value: 80
- type: mrr_at_10
value: 87.167
- type: mrr_at_100
value: 87.167
- type: mrr_at_1000
value: 87.167
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 87.167
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 69.757
- type: ndcg_at_100
value: 52.402
- type: ndcg_at_1000
value: 47.737
- type: ndcg_at_3
value: 71.866
- type: ndcg_at_5
value: 72.225
- type: precision_at_1
value: 80
- type: precision_at_10
value: 75
- type: precision_at_100
value: 53.959999999999994
- type: precision_at_1000
value: 21.568
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.9189999999999998
- type: recall_at_100
value: 12.589
- type: recall_at_1000
value: 45.312000000000005
- type: recall_at_3
value: 0.61
- type: recall_at_5
value: 1.019
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 90.06
- type: precision
value: 89.17333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.06936416184971
- type: f1
value: 50.87508028259473
- type: precision
value: 48.97398843930635
- type: recall
value: 56.06936416184971
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.3170731707317
- type: f1
value: 52.96080139372822
- type: precision
value: 51.67861124382864
- type: recall
value: 57.3170731707317
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.67333333333333
- type: precision
value: 91.90833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97.07333333333332
- type: precision
value: 96.79500000000002
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.2
- type: precision
value: 92.48333333333333
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.9
- type: f1
value: 91.26666666666667
- type: precision
value: 90.59444444444445
- type: recall
value: 92.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 34.32835820895522
- type: f1
value: 29.074180380150533
- type: precision
value: 28.068207322920596
- type: recall
value: 34.32835820895522
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.5
- type: f1
value: 74.3945115995116
- type: precision
value: 72.82967843459222
- type: recall
value: 78.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34146341463415
- type: f1
value: 61.2469400518181
- type: precision
value: 59.63977756660683
- type: recall
value: 66.34146341463415
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9
- type: f1
value: 76.90349206349207
- type: precision
value: 75.32921568627451
- type: recall
value: 80.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.93317132442284
- type: f1
value: 81.92519105034295
- type: precision
value: 80.71283920615635
- type: recall
value: 84.93317132442284
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.1304347826087
- type: f1
value: 65.22394755003451
- type: precision
value: 62.912422360248435
- type: recall
value: 71.1304347826087
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.82608695652173
- type: f1
value: 75.55693581780538
- type: precision
value: 73.79420289855072
- type: recall
value: 79.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74
- type: f1
value: 70.51022222222223
- type: precision
value: 69.29673599347512
- type: recall
value: 74
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 74.14238095238095
- type: precision
value: 72.27214285714285
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.97466827503016
- type: f1
value: 43.080330405420874
- type: precision
value: 41.36505499593557
- type: recall
value: 48.97466827503016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.60000000000001
- type: f1
value: 86.62333333333333
- type: precision
value: 85.225
- type: recall
value: 89.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.2
- type: f1
value: 39.5761253006253
- type: precision
value: 37.991358436312
- type: recall
value: 45.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.70333333333333
- type: precision
value: 85.53166666666667
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.095238095238095
- type: f1
value: 44.60650460650461
- type: precision
value: 42.774116796477045
- type: recall
value: 50.095238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.4
- type: f1
value: 58.35967261904762
- type: precision
value: 56.54857142857143
- type: recall
value: 63.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 87.075
- type: precision
value: 86.12095238095239
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.90333333333334
- type: precision
value: 95.50833333333333
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.9
- type: f1
value: 88.6288888888889
- type: precision
value: 87.61607142857142
- type: recall
value: 90.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.2
- type: f1
value: 60.54377630539395
- type: precision
value: 58.89434482711381
- type: recall
value: 65.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87
- type: f1
value: 84.32412698412699
- type: precision
value: 83.25527777777778
- type: recall
value: 87
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.7
- type: f1
value: 63.07883541295306
- type: precision
value: 61.06117424242426
- type: recall
value: 68.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.78333333333335
- type: precision
value: 90.86666666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 96.96666666666667
- type: precision
value: 96.61666666666667
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27493261455525
- type: f1
value: 85.90745732255168
- type: precision
value: 84.91389637616052
- type: recall
value: 88.27493261455525
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5982905982906
- type: f1
value: 88.4900284900285
- type: precision
value: 87.57122507122507
- type: recall
value: 90.5982905982906
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.90769841269842
- type: precision
value: 85.80178571428571
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.5
- type: f1
value: 78.36796536796538
- type: precision
value: 76.82196969696969
- type: recall
value: 82.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.48846960167715
- type: f1
value: 66.78771089148448
- type: precision
value: 64.98302885095339
- type: recall
value: 71.48846960167715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.50333333333333
- type: precision
value: 91.77499999999999
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.20622568093385
- type: f1
value: 66.83278891450098
- type: precision
value: 65.35065777283677
- type: recall
value: 71.20622568093385
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.717948717948715
- type: f1
value: 43.53146853146853
- type: precision
value: 42.04721204721204
- type: recall
value: 48.717948717948715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.5
- type: f1
value: 53.8564991863928
- type: precision
value: 52.40329436122275
- type: recall
value: 58.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.29
- type: precision
value: 87.09166666666667
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.28971962616822
- type: f1
value: 62.63425307817832
- type: precision
value: 60.98065939771546
- type: recall
value: 67.28971962616822
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 75.5264472455649
- type: precision
value: 74.38205086580086
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.7
- type: f1
value: 86.10809523809525
- type: precision
value: 85.07602564102565
- type: recall
value: 88.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.99999999999999
- type: f1
value: 52.85487521402737
- type: precision
value: 51.53985162713104
- type: recall
value: 56.99999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94
- type: f1
value: 92.45333333333333
- type: precision
value: 91.79166666666667
- type: recall
value: 94
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.61333333333333
- type: precision
value: 89.83333333333331
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34555555555555
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.6563035113035
- type: precision
value: 75.3014652014652
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.7
- type: f1
value: 82.78689263765207
- type: precision
value: 82.06705086580087
- type: recall
value: 84.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.33333333333333
- type: f1
value: 45.461523661523664
- type: precision
value: 43.93545574795575
- type: recall
value: 50.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.6000000000000005
- type: f1
value: 5.442121400446441
- type: precision
value: 5.146630385487529
- type: recall
value: 6.6000000000000005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85
- type: f1
value: 81.04666666666667
- type: precision
value: 79.25
- type: recall
value: 85
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.32142857142857
- type: f1
value: 42.333333333333336
- type: precision
value: 40.69196428571429
- type: recall
value: 47.32142857142857
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 30.735455543358945
- type: f1
value: 26.73616790022338
- type: precision
value: 25.397823220451283
- type: recall
value: 30.735455543358945
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.1
- type: f1
value: 21.975989896371022
- type: precision
value: 21.059885632257203
- type: recall
value: 25.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.75666666666666
- type: precision
value: 92.06166666666665
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.74
- type: precision
value: 92.09166666666667
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.3
- type: f1
value: 66.922442002442
- type: precision
value: 65.38249567099568
- type: recall
value: 71.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.300000000000004
- type: f1
value: 35.78682789299971
- type: precision
value: 34.66425128716588
- type: recall
value: 40.300000000000004
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.82333333333334
- type: precision
value: 94.27833333333334
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.1
- type: f1
value: 47.179074753133584
- type: precision
value: 46.06461044702424
- type: recall
value: 51.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.7
- type: f1
value: 84.71
- type: precision
value: 83.46166666666667
- type: recall
value: 87.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.68333333333334
- type: precision
value: 94.13333333333334
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 82.5577380952381
- type: precision
value: 81.36833333333334
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.16788321167883
- type: f1
value: 16.948865627297987
- type: precision
value: 15.971932568647897
- type: recall
value: 21.16788321167883
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 5.515526831658907
- type: precision
value: 5.141966366966367
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39666666666668
- type: precision
value: 90.58666666666667
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.95666666666666
- type: precision
value: 88.92833333333333
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.76190476190477
- type: f1
value: 74.93386243386244
- type: precision
value: 73.11011904761904
- type: recall
value: 79.76190476190477
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.921439712248537
- type: precision
value: 6.489885109680683
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.75569358178054
- type: f1
value: 40.34699501312631
- type: precision
value: 38.57886764719063
- type: recall
value: 45.75569358178054
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.08333333333333
- type: precision
value: 88.01666666666668
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.06690476190477
- type: precision
value: 91.45095238095239
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.5
- type: f1
value: 6.200363129378736
- type: precision
value: 5.89115314822466
- type: recall
value: 7.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.59307359307358
- type: f1
value: 68.38933553219267
- type: precision
value: 66.62698412698413
- type: recall
value: 73.59307359307358
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.8473282442748
- type: f1
value: 64.72373682297346
- type: precision
value: 62.82834214131924
- type: recall
value: 69.8473282442748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5254730713246
- type: f1
value: 96.72489082969432
- type: precision
value: 96.33672974284326
- type: recall
value: 97.5254730713246
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.6
- type: f1
value: 72.42746031746033
- type: precision
value: 71.14036630036631
- type: recall
value: 75.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.24293785310734
- type: f1
value: 88.86064030131826
- type: precision
value: 87.73540489642184
- type: recall
value: 91.24293785310734
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.2
- type: f1
value: 4.383083659794954
- type: precision
value: 4.027861324289673
- type: recall
value: 6.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 84.09428571428572
- type: precision
value: 83.00333333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.699999999999996
- type: f1
value: 56.1584972394755
- type: precision
value: 54.713456330903135
- type: recall
value: 60.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.2
- type: f1
value: 80.66190476190475
- type: precision
value: 79.19690476190476
- type: recall
value: 84.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.33
- type: precision
value: 90.45
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 5.126828976748276
- type: precision
value: 4.853614328966668
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.76943699731903
- type: f1
value: 77.82873739308057
- type: precision
value: 76.27622452019234
- type: recall
value: 81.76943699731903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.29666666666665
- type: precision
value: 89.40333333333334
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.249011857707508
- type: f1
value: 24.561866096392947
- type: precision
value: 23.356583740215456
- type: recall
value: 29.249011857707508
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.23943661971832
- type: precision
value: 71.66666666666667
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.35928143712575
- type: f1
value: 15.997867865075824
- type: precision
value: 14.882104658301346
- type: recall
value: 20.35928143712575
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 90.25999999999999
- type: precision
value: 89.45333333333335
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 19.65673625772148
- type: precision
value: 18.793705293464992
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.154929577464785
- type: f1
value: 52.3868463305083
- type: precision
value: 50.14938113529662
- type: recall
value: 59.154929577464785
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.51282051282051
- type: f1
value: 66.8089133089133
- type: precision
value: 65.37645687645687
- type: recall
value: 70.51282051282051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93
- type: precision
value: 92.23333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.62212943632568
- type: f1
value: 34.3278276962583
- type: precision
value: 33.07646935732408
- type: recall
value: 38.62212943632568
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.1
- type: f1
value: 23.579609223054604
- type: precision
value: 22.39622774921555
- type: recall
value: 28.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27361563517914
- type: f1
value: 85.12486427795874
- type: precision
value: 83.71335504885994
- type: recall
value: 88.27361563517914
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 86.39928571428571
- type: precision
value: 85.4947557997558
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.77952380952381
- type: precision
value: 82.67602564102565
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.52755905511812
- type: f1
value: 75.3055868016498
- type: precision
value: 73.81889763779527
- type: recall
value: 79.52755905511812
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.76261904761905
- type: precision
value: 72.11670995670995
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.8781163434903
- type: f1
value: 47.25804051288816
- type: precision
value: 45.0603482390186
- type: recall
value: 53.8781163434903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.88
- type: precision
value: 87.96333333333334
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.46153846153847
- type: f1
value: 34.43978243978244
- type: precision
value: 33.429487179487175
- type: recall
value: 38.46153846153847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.9
- type: f1
value: 86.19888888888887
- type: precision
value: 85.07440476190476
- type: recall
value: 88.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.9
- type: f1
value: 82.58857142857143
- type: precision
value: 81.15666666666667
- type: recall
value: 85.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.36999999999999
- type: precision
value: 81.86833333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.51415094339622
- type: f1
value: 63.195000099481234
- type: precision
value: 61.394033442972116
- type: recall
value: 68.51415094339622
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 86.14603174603175
- type: precision
value: 85.1162037037037
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.62043795620438
- type: f1
value: 94.40389294403892
- type: precision
value: 93.7956204379562
- type: recall
value: 95.62043795620438
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.8
- type: f1
value: 78.6532178932179
- type: precision
value: 77.46348795840176
- type: recall
value: 81.8
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.603
- type: map_at_10
value: 8.5
- type: map_at_100
value: 12.985
- type: map_at_1000
value: 14.466999999999999
- type: map_at_3
value: 4.859999999999999
- type: map_at_5
value: 5.817
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 42.331
- type: mrr_at_100
value: 43.592999999999996
- type: mrr_at_1000
value: 43.592999999999996
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 39.966
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 21.353
- type: ndcg_at_100
value: 31.087999999999997
- type: ndcg_at_1000
value: 43.163000000000004
- type: ndcg_at_3
value: 22.999
- type: ndcg_at_5
value: 21.451
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 6.265
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 2.603
- type: recall_at_10
value: 14.474
- type: recall_at_100
value: 40.287
- type: recall_at_1000
value: 76.606
- type: recall_at_3
value: 5.978
- type: recall_at_5
value: 7.819
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.7848
- type: ap
value: 13.661023167088224
- type: f1
value: 53.61686134460943
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.28183361629882
- type: f1
value: 61.55481034919965
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.972128420092396
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59933241938367
- type: cos_sim_ap
value: 72.20760361208136
- type: cos_sim_f1
value: 66.4447731755424
- type: cos_sim_precision
value: 62.35539102267469
- type: cos_sim_recall
value: 71.10817941952506
- type: dot_accuracy
value: 78.98313166835548
- type: dot_ap
value: 44.492521645493795
- type: dot_f1
value: 45.814889336016094
- type: dot_precision
value: 37.02439024390244
- type: dot_recall
value: 60.07915567282321
- type: euclidean_accuracy
value: 85.3907134767837
- type: euclidean_ap
value: 71.53847289080343
- type: euclidean_f1
value: 65.95952206778834
- type: euclidean_precision
value: 61.31006346328196
- type: euclidean_recall
value: 71.37203166226914
- type: manhattan_accuracy
value: 85.40859510043511
- type: manhattan_ap
value: 71.49664104395515
- type: manhattan_f1
value: 65.98569969356485
- type: manhattan_precision
value: 63.928748144482924
- type: manhattan_recall
value: 68.17941952506597
- type: max_accuracy
value: 85.59933241938367
- type: max_ap
value: 72.20760361208136
- type: max_f1
value: 66.4447731755424
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.83261536073273
- type: cos_sim_ap
value: 85.48178133644264
- type: cos_sim_f1
value: 77.87816307403935
- type: cos_sim_precision
value: 75.88953021114926
- type: cos_sim_recall
value: 79.97382198952879
- type: dot_accuracy
value: 79.76287499514883
- type: dot_ap
value: 59.17438838475084
- type: dot_f1
value: 56.34566667855996
- type: dot_precision
value: 52.50349092359864
- type: dot_recall
value: 60.794579611949494
- type: euclidean_accuracy
value: 88.76857996662397
- type: euclidean_ap
value: 85.22764834359887
- type: euclidean_f1
value: 77.65379751543554
- type: euclidean_precision
value: 75.11152683839401
- type: euclidean_recall
value: 80.37419156144134
- type: manhattan_accuracy
value: 88.6987231730508
- type: manhattan_ap
value: 85.18907981724007
- type: manhattan_f1
value: 77.51967028849757
- type: manhattan_precision
value: 75.49992701795358
- type: manhattan_recall
value: 79.65044656606098
- type: max_accuracy
value: 88.83261536073273
- type: max_ap
value: 85.48178133644264
- type: max_f1
value: 77.87816307403935
---
## Multilingual-E5-base
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
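If a GPU is available, the model and tokenized inputs can be moved to it with standard `torch` calls before the forward pass. A minimal sketch, assuming a CUDA device is present:
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
batch_dict = {k: v.to(device) for k, v in batch_dict.items()}
with torch.no_grad():
    outputs = model(**batch_dict)
```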
## Supported Languages
This model is initialized from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
**First stage**: contrastive pre-training with weak supervision
| Dataset | Weak supervision | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M |
| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M |
| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M |
| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M |
| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M |
| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M |
**Second stage**: supervised fine-tuning
| Dataset | Language | # of text pairs |
|----------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k |
| [NQ](https://github.com/facebookresearch/DPR) | English | 70k |
| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k |
| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k |
| [Quora](https://huggingface.co/datasets/quora) | English | 150k |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k |
For all labeled datasets, we only use their training sets for fine-tuning.
For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
| Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th |
|-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- |
| BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 |
| BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| | |
| multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example of usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-base')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
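Because `normalize_embeddings=True` is set, a plain dot product between the resulting vectors gives cosine similarity scores. A minimal follow-up sketch, assuming `embeddings` is the array returned by `model.encode` above:
```python
# Rank the two passages against the two queries.
scores = embeddings[:2] @ embeddings[2:].T
print(scores.tolist())
```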
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; otherwise you will see a performance degradation.
Here are some rules of thumb (a minimal prefixing sketch follows this list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**
This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
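As an illustration, ranking depends only on the relative order of the scores, not on their absolute range; a sketch with hypothetical similarity values:
```python
import numpy as np

scores = np.array([0.93, 0.78, 0.86])  # hypothetical cosine similarities in the 0.7-1.0 range
ranking = np.argsort(-scores)          # array([0, 2, 1]); unaffected by the absolute score range
print(ranking)
```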
## Citation
If you find our paper or models helpful, please consider citing us as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
| [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Isotonic/deberta-v3-base-market | Isotonic | text-classification | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"en",
"dataset:0xMaka/trading-candles-subset-sc-format",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,689 | 1,692 | 11 | 3 | ---
datasets:
- 0xMaka/trading-candles-subset-sc-format
language:
- en
license: gpl
metrics:
- accuracy
- f1
widget:
- text: 'identify candle: 17284.58,17264.41,17284.58,17264.41'
example_title: Bear
- text: 'identify candle: open: 17343.43, close: 17625.18, high: 17804.68, low: 17322.15'
example_title: Bull
---
## Model description
deberta-v3-base-market is a text classification model for binary classification of a trading candle's opening and closing prices.
## Uses and limitations
This model can reliably return the bullish or bearish status of a candle given the opening, closing, high, and low prices in the formats shown.
It will have trouble if the order of the numbers changes (even if tags are included).
### How to use
You can use this model directly with a pipeline:
```python
>>> from transformers import pipeline
>>> pipe = pipeline("text-classification", model="Isotonic/deberta-v3-base-market", num_labels=2, use_fast=False)
>>> text = "identify candle: open: 21788.19, close: 21900, high: 21965.23, low: 21788.19"
>>> pipe(text)
[{'label': 'Bullish', 'score': 0.9999979734420776}]
```
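The compact comma-separated format from the widget examples can be passed to the same pipeline. A minimal sketch reusing `pipe` from above (the expected label comes from the widget's "Bear" example title, not from a new run):
```python
# The four prices given as bare numbers (presumably open, close, high, low).
result = pipe("identify candle: 17284.58,17264.41,17284.58,17264.41")
print(result)  # per the widget example, this candle should be classified as bearish
```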
| [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] | Non_BioNLP |
Cohere/Cohere-embed-multilingual-light-v3.0 | Cohere | null | [
"transformers",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | 1,698 | 1,699 | 22 | 13 | ---
tags:
- mteb
model-index:
- name: embed-multilingual-light-v3.0
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.02985074626865
- type: ap
value: 33.228065779544146
- type: f1
value: 64.27173953207297
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.701225
- type: ap
value: 87.07178174251762
- type: f1
value: 90.69168484877625
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.550000000000004
- type: f1
value: 44.7233215588199
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 53.369
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.206988765030744
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 33.913737041277
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.544257541214925
- type: mrr
value: 72.07151651057468
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.79582115243736
- type: cos_sim_spearman
value: 84.01396250789998
- type: euclidean_pearson
value: 83.90766476102458
- type: euclidean_spearman
value: 84.01396250789998
- type: manhattan_pearson
value: 84.75071274784274
- type: manhattan_spearman
value: 85.02482891467078
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.12337662337663
- type: f1
value: 77.48610340227478
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.68268504601174
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.20870648143671
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 46.259
- type: ndcg_at_10
value: 44.555
- type: ndcg_at_10
value: 56.564
- type: ndcg_at_10
value: 36.162
- type: ndcg_at_10
value: 26.185000000000002
- type: ndcg_at_10
value: 41.547
- type: ndcg_at_10
value: 39.042
- type: ndcg_at_10
value: 38.086999999999996
- type: ndcg_at_10
value: 32.088
- type: ndcg_at_10
value: 27.006999999999998
- type: ndcg_at_10
value: 37.336999999999996
- type: ndcg_at_10
value: 38.011
- type: ndcg_at_10
value: 32.287
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 24.804000000000002
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 38.055
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.665
- type: f1
value: 40.77568559660878
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 85.52499999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 36.161
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 66.878
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.6372
- type: ap
value: 80.54846874011302
- type: f1
value: 85.61438421821343
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 40.487
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.8559051527588
- type: f1
value: 91.6271749996447
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 62.17738258093936
- type: f1
value: 45.80307070449218
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.42434431741762
- type: f1
value: 65.39580264698957
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.60928043039677
- type: f1
value: 72.30912915707411
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 35.17967476592229
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.993641089208683
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.362481813275295
- type: mrr
value: 32.43717742343303
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 32.123000000000005
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 55.51199999999999
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 87.847
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 49.4973643968247
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.2135284243427
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 17.1
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.7330191296952
- type: cos_sim_spearman
value: 77.03523134004043
- type: euclidean_pearson
value: 80.86067787185137
- type: euclidean_spearman
value: 77.03522959536473
- type: manhattan_pearson
value: 80.76089708603587
- type: manhattan_spearman
value: 76.86245377437302
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 80.46387812633851
- type: cos_sim_spearman
value: 73.21878234127571
- type: euclidean_pearson
value: 76.82160699895033
- type: euclidean_spearman
value: 73.21878234127571
- type: manhattan_pearson
value: 76.75657006349886
- type: manhattan_spearman
value: 73.19160258034827
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 79.06411399119807
- type: cos_sim_spearman
value: 79.49916779764082
- type: euclidean_pearson
value: 79.3356521660954
- type: euclidean_spearman
value: 79.49916779764082
- type: manhattan_pearson
value: 79.04971532119936
- type: manhattan_spearman
value: 79.16859911220654
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.6940934994372
- type: cos_sim_spearman
value: 76.9552055757283
- type: euclidean_pearson
value: 79.52818133592284
- type: euclidean_spearman
value: 76.9552055757283
- type: manhattan_pearson
value: 79.35220459438406
- type: manhattan_spearman
value: 76.85314462036561
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.58608774451231
- type: cos_sim_spearman
value: 86.42805701554927
- type: euclidean_pearson
value: 86.01117122595934
- type: euclidean_spearman
value: 86.42805701554927
- type: manhattan_pearson
value: 86.01345208923057
- type: manhattan_spearman
value: 86.43179450307953
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.18733039014667
- type: cos_sim_spearman
value: 84.3339529564109
- type: euclidean_pearson
value: 83.54530885349595
- type: euclidean_spearman
value: 84.3339529564109
- type: manhattan_pearson
value: 83.47015931913937
- type: manhattan_spearman
value: 84.22564786654777
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.88402211340522
- type: cos_sim_spearman
value: 88.6693290310468
- type: euclidean_pearson
value: 88.24947476618257
- type: euclidean_spearman
value: 88.6693290310468
- type: manhattan_pearson
value: 88.24496656367964
- type: manhattan_spearman
value: 88.52029848819545
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.96467575926597
- type: cos_sim_spearman
value: 65.30666900046252
- type: euclidean_pearson
value: 66.58031971340725
- type: euclidean_spearman
value: 65.30666900046252
- type: manhattan_pearson
value: 66.56530433327998
- type: manhattan_spearman
value: 65.42121899024113
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.31047656296519
- type: cos_sim_spearman
value: 85.46101092708824
- type: euclidean_pearson
value: 85.75896623084044
- type: euclidean_spearman
value: 85.46101092708824
- type: manhattan_pearson
value: 85.57323880630182
- type: manhattan_spearman
value: 85.23375523080594
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.89731978284804
- type: mrr
value: 94.28980424078465
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 67.95
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85643564356435
- type: cos_sim_ap
value: 96.59618618212247
- type: cos_sim_f1
value: 92.6221335992024
- type: cos_sim_precision
value: 92.34592445328032
- type: cos_sim_recall
value: 92.9
- type: dot_accuracy
value: 99.85643564356435
- type: dot_ap
value: 96.5961861821225
- type: dot_f1
value: 92.6221335992024
- type: dot_precision
value: 92.34592445328032
- type: dot_recall
value: 92.9
- type: euclidean_accuracy
value: 99.85643564356435
- type: euclidean_ap
value: 96.5961861821225
- type: euclidean_f1
value: 92.6221335992024
- type: euclidean_precision
value: 92.34592445328032
- type: euclidean_recall
value: 92.9
- type: manhattan_accuracy
value: 99.85841584158416
- type: manhattan_ap
value: 96.5578240948512
- type: manhattan_f1
value: 92.71523178807946
- type: manhattan_precision
value: 94.4963655244029
- type: manhattan_recall
value: 91.0
- type: max_accuracy
value: 99.85841584158416
- type: max_ap
value: 96.5961861821225
- type: max_f1
value: 92.71523178807946
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.84750068050385
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.96844721192451
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.454280909595205
- type: mrr
value: 51.24249320940497
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.998438678552517
- type: cos_sim_spearman
value: 30.409482543506876
- type: dot_pearson
value: 29.998443850173224
- type: dot_spearman
value: 30.409482543506876
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 78.93
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: ndcg_at_10
value: 29.482999999999997
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.65859999999999
- type: ap
value: 15.03693738050973
- type: f1
value: 54.94379403846167
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.4567062818336
- type: f1
value: 64.48980729427107
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.08554991843959
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.75293556654945
- type: cos_sim_ap
value: 69.40551043272129
- type: cos_sim_f1
value: 65.56335231034026
- type: cos_sim_precision
value: 65.79856497475419
- type: cos_sim_recall
value: 65.32981530343008
- type: dot_accuracy
value: 84.75293556654945
- type: dot_ap
value: 69.40550704470631
- type: dot_f1
value: 65.56335231034026
- type: dot_precision
value: 65.79856497475419
- type: dot_recall
value: 65.32981530343008
- type: euclidean_accuracy
value: 84.75293556654945
- type: euclidean_ap
value: 69.4055136381454
- type: euclidean_f1
value: 65.56335231034026
- type: euclidean_precision
value: 65.79856497475419
- type: euclidean_recall
value: 65.32981530343008
- type: manhattan_accuracy
value: 84.6337247422066
- type: manhattan_ap
value: 69.13628354134198
- type: manhattan_f1
value: 65.46998180715585
- type: manhattan_precision
value: 60.58361391694726
- type: manhattan_recall
value: 71.21372031662268
- type: max_accuracy
value: 84.75293556654945
- type: max_ap
value: 69.4055136381454
- type: max_f1
value: 65.56335231034026
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.04800714091667
- type: cos_sim_ap
value: 85.84596325009252
- type: cos_sim_f1
value: 78.39228527221042
- type: cos_sim_precision
value: 73.58643518205768
- type: cos_sim_recall
value: 83.86972590083154
- type: dot_accuracy
value: 89.04800714091667
- type: dot_ap
value: 85.8459646697087
- type: dot_f1
value: 78.39228527221042
- type: dot_precision
value: 73.58643518205768
- type: dot_recall
value: 83.86972590083154
- type: euclidean_accuracy
value: 89.04800714091667
- type: euclidean_ap
value: 85.84596376376919
- type: euclidean_f1
value: 78.39228527221042
- type: euclidean_precision
value: 73.58643518205768
- type: euclidean_recall
value: 83.86972590083154
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.80124417850608
- type: manhattan_f1
value: 78.37817859254879
- type: manhattan_precision
value: 75.36963321012226
- type: manhattan_recall
value: 81.63689559593472
- type: max_accuracy
value: 89.04800714091667
- type: max_ap
value: 85.8459646697087
- type: max_f1
value: 78.39228527221042
---
# Cohere embed-multilingual-light-v3.0
This repository contains the tokenizer for the Cohere `embed-multilingual-light-v3.0` model. See our blogpost [Cohere Embed V3](https://txt.cohere.com/introducing-embed-v3/) for more details on this model.
You can use the embedding model either via the Cohere API, AWS SageMaker or in your private deployments.
## Usage Cohere API
The following code snippet shows the usage of the Cohere API. Install the cohere SDK via:
```
pip install -U cohere
```
Get your free API key at: www.cohere.com
```python
# This snippet shows an example of how to use the Cohere Embed V3 models for semantic search.
# Make sure to have the Cohere SDK installed, at least v4.30: pip install -U cohere
# Get your API key from: www.cohere.com
import cohere
import numpy as np
cohere_key = "{YOUR_COHERE_API_KEY}" #Get your API key from www.cohere.com
co = cohere.Client(cohere_key)
docs = ["The capital of France is Paris",
"PyTorch is a machine learning framework based on the Torch library.",
"The average cat lifespan is between 13-17 years"]
#Encode your documents with input type 'search_document'
doc_emb = co.embed(docs, input_type="search_document", model="embed-multilingual-light-v3.0").embeddings
doc_emb = np.asarray(doc_emb)
#Encode your query with input type 'search_query'
query = "What is Pytorch"
query_emb = co.embed([query], input_type="search_query", model="embed-multilingual-light-v3.0").embeddings
query_emb = np.asarray(query_emb)
print(query_emb.shape)
#Compute the dot product between query embedding and document embedding
scores = np.dot(query_emb, doc_emb.T)[0]
#Find the highest scores
max_idx = np.argsort(-scores)
print(f"Query: {query}")
for idx in max_idx:
print(f"Score: {scores[idx]:.2f}")
print(docs[idx])
print("--------")
```
## Usage AWS SageMaker
The embedding model can be privately deployed in your AWS Cloud using our [AWS SageMaker marketplace offering](https://aws.amazon.com/marketplace/pp/prodview-z6huxszcqc25i). It runs privately in your VPC, with latencies as low as 5ms for query encoding.
## Usage AWS Bedrock
The model will soon also be available via AWS Bedrock. Stay tuned.
## Private Deployment
Do you want to run the model on your own hardware? [Contact Sales](https://cohere.com/contact-sales) to learn more.
## Supported Languages
This model was trained on nearly 1B English training pairs and nearly 0.5B Non-English training pairs from 100+ languages.
Evaluation results can be found in the [Embed V3.0 Benchmark Results spreadsheet](https://docs.google.com/spreadsheets/d/1w7gnHWMDBdEUrmHgSfDnGHJgVQE5aOiXCCwO3uNH_mI/edit?usp=sharing). | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
lucyknada/PocketDoc_Dans-PersonalityEngine-V1.1.0-12b-exl2 | lucyknada | text-generation | [
"transformers",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"text-generation",
"en",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PocketDoc/Dans-Mathmaxx-Numina-CoT",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Benchmaxx",
"dataset:PocketDoc/Dans-Benchmaxx-COT",
"dataset:PocketDoc/Dans-Codemaxx-LeetCode",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn",
"dataset:PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-apigen",
"dataset:PocketDoc/Dans-ASCIIMaxx-Wordart",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-M",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Prosemaxx-Gryphe-GPT4o-WritingPrompts",
"dataset:PocketDoc/Dans-Assistantmaxx-Sharegpt",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-NoRobots",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-EvolKit",
"dataset:PocketDoc/Dans-Assistantmaxx-Camel-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Logicmaxx-Skunkworks",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PocketDoc/Dans-Logicmaxx-Magpie-Ultra",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Personamaxx",
"dataset:PocketDoc/Dans-Personamaxx-Rainy",
"dataset:PocketDoc/Dans-Personamaxx-Aesir",
"dataset:PocketDoc/Dans-Kinomaxx-VanillaBackrooms",
"base_model:mistralai/Mistral-Nemo-Base-2407",
"base_model:finetune:mistralai/Mistral-Nemo-Base-2407",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,734 | 1,734 | 4 | 1 | ---
base_model:
- mistralai/Mistral-Nemo-Base-2407
datasets:
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/Energetic-Materials-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PocketDoc/Dans-Mathmaxx-Numina-CoT
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Benchmaxx
- PocketDoc/Dans-Benchmaxx-COT
- PocketDoc/Dans-Codemaxx-LeetCode
- PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations
- PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn
- PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-Toolmaxx-Functions-apigen
- PocketDoc/Dans-ASCIIMaxx-Wordart
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-M
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Prosemaxx-Gryphe-GPT4o-WritingPrompts
- PocketDoc/Dans-Assistantmaxx-Sharegpt
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-NoRobots
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-EvolKit
- PocketDoc/Dans-Assistantmaxx-Camel-GPT4
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Logicmaxx-Skunkworks
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PocketDoc/Dans-Logicmaxx-Magpie-Ultra
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Personamaxx
- PocketDoc/Dans-Personamaxx-Rainy
- PocketDoc/Dans-Personamaxx-Aesir
- PocketDoc/Dans-Kinomaxx-VanillaBackrooms
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
model-index:
- name: Dans-PersonalityEngine-V1.1.0-12b
results: []
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<h2>Dans-PersonalityEngine-V1.1.0-12b</h2>
<p>This model series is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and equally at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, tool use, role-playing scenarios, text adventure games, co-writing, and much more.</p>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Nemo-Base-2407
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 32768 tokens</pre>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.95
MIN_P: 0.05</pre>
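<p>These map directly onto common sampling parameters. As a rough sketch, assuming the Hugging Face transformers library (a recent release that supports <code>min_p</code>):</p>
<pre class="code-block">
from transformers import GenerationConfig

# Sketch only: the recommended settings expressed as a transformers generation config
gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.0,  # TEMPERATURE
    top_p=0.95,       # TOP_P
    min_p=0.05,       # MIN_P
    max_new_tokens=512,
)</pre>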
<h3>Prompting Format</h3>
<p>The model uses the standard "ChatML" format:</p>
<pre class="code-block">
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|></pre>
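<p>Outside SillyTavern, the prompt can be assembled by hand as shown above or, assuming the tokenizer ships this ChatML template, via <code>apply_chat_template</code>. A minimal sketch; the repo id is illustrative, so point it at whichever upload of this model you actually run:</p>
<pre class="code-block">
from transformers import AutoTokenizer

# Illustrative repo id; substitute the quant or full-precision upload you are running
tokenizer = AutoTokenizer.from_pretrained("PocketDoc/Dans-PersonalityEngine-V1.1.0-12b")

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
]

# Renders the ChatML prompt and appends the assistant start tag so generation continues from there
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)</pre>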
<h3>SillyTavern Templates</h3>
<details>
<summary>Context Template</summary>
<pre class="code-block">
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": false,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Dan-ChatML"
}</pre>
</details>
<details>
<summary>Instruct Template</summary>
<pre class="code-block">
{
"system_prompt": "Write {{char}}'s actions and dialogue, user will write {{user}}'s.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"first_output_sequence": "",
"last_output_sequence": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": false,
"names_force_groups": false,
"activation_regex": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_sequence": "<|im_start|>system\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"first_input_sequence": "",
"last_input_sequence": "",
"name": "Dan-ChatML"
}</pre>
</details>
<h3>Example Outputs</h3>
<details>
<summary>General Assistance Example</summary>
<p><em>System: Act as a personable virtual assistant.</em></p>
<p><strong>User:</strong> I'm going to be in Chicago soon can you help me work up some restaurant ideas?</p>
<p><strong>Assistant:</strong> I'd love to! Here are a few suggestions for restaurants in Chicago:</p>
<ul>
<li>For deep dish pizza, I highly recommend Lou Malnati's Pizzeria or Giordano's. Both are iconic Chicago institutions and serve some of the best deep dish pizza in the city.</li>
<li>If you're looking for a classic steakhouse experience, Gibson's Bar & Steakhouse is a great choice. They offer high-quality steaks, seafood, and an extensive wine list in a sophisticated atmosphere.</li>
<li>For a more casual dining experience, check out Portillo's Hot Dogs. They're known for their Chicago-style hot dogs, Italian beef sandwiches, and other classic American fare.</li>
<li>For something a bit more unique, try Alinea. It's a high-end, avant-garde restaurant that offers a truly one-of-a-kind dining experience with innovative cuisine and cutting-edge presentation techniques.</li>
</ul>
<p>Let me know if you need any other recommendations or have specific dietary restrictions or preferences!</p>
</details>
<details>
<summary>Basic Programming Example</summary>
<p><em>System: Answer the question or perform the task.</em></p>
<p><strong>User:</strong> Explain the bug in the following code:</p>
<pre class="code-block">
from time import sleep
from multiprocessing.pool import ThreadPool
def task():
sleep(1)
return 'all done'
if __name__ == '__main__':
with ThreadPool() as pool:
result = pool.apply_async(task())</pre>
<p><strong>Assistant:</strong> The bug in this code is that the `task()` function is called immediately when the `apply_async()` method is invoked, not when the worker process is ready to execute it.</p>
<p>Specifically, the line `result = pool.apply_async(task())` calls `task()` right away and passes its return value to `apply_async()`. The worker process will execute whatever `task()` returns at that moment, which is `'all done'` since `task()` completes execution almost immediately.</p>
<p>To fix this bug, you need to pass the `task` function itself to `apply_async()`, rather than calling `task()` beforehand. Here's the corrected code:</p>
<pre class="code-block">
from time import sleep
from multiprocessing.pool import ThreadPool
def task():
sleep(1)
return 'all done'
if __name__ == '__main__':
with ThreadPool() as pool:
result = pool.apply_async(task) # Pass the function, not the result
value = result.get() # Wait for the result
print(value)</pre>
</details>
<details>
<summary>Writing Example</summary>
<p><em>System: Take the users messages and use them as a writing prompt. Write 5 sentences.</em></p>
<p><strong>User:</strong> Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.</p>
<p><strong>Assistant:</strong> "Sir, they are gone sir." The young corporal was out of breath.</p>
<p>The soldier looked up. His grey eyes had been locked on his holofac display for hours now. He was in his late thirties and wore the rank of colonel. He was tall, with a well-muscled but slightly over weight body.</p>
<p><strong>User:</strong> "Who?"</p>
<p><strong>Assistant:</strong> He turned and looked at the corporal.</p>
<p>"The bears. They left. We checked the entire perimeter. No bears to be found sir."</p>
<p>"Did you check the air?"</p>
<p>"Not the air, sir. We didn't see any shuttles. But we have drones out now."</p>
<p><strong>User:</strong> "*Sigh*"</p>
<p><strong>Assistant:</strong> "Ok, get me the status report. I need to know how many dead we have."</p>
<p>"Yes sir." The corporal turned to leave.</p>
<p>"Corporal…"</p>
<p>"Sir?"</p>
<p>"Get me a fresh cup of coffee. Black. I need the caffeine."</p>
</details>
<details>
<summary>Text Editing Example</summary>
<p><em>System: You are a professional editor with decades of experience, help the user with any task they have for you.</em></p>
<p><strong>User:</strong> Can you rewrite this to flow better? "I knew I probably shouldnt have done that but ohhh well i guess i just going to live with it since whats done is done"</p>
<p><strong>Assistant:</strong> I knew I probably shouldn't have done that, but oh well. I guess I just going to live with it since what's done is done.</p>
</details>
<h3>Datasets</h3>
<details>
<summary>Breakdown</summary>
<ul>
<li>Memory & Core Curriculum
<ul>
<li>Dans-MemoryCore-CoreCurriculum-Small - Base knowledge</li>
</ul>
</li>
<li>Military & Survival Knowledge
<ul>
<li>Energetic-Materials - Understanding of explosives and related chemistry</li>
<li>Chemical-Biological-Safety-Applications - Safety protocols, handling procedures, etc.</li>
<li>US-Army-Survival - Survival techniques and field craft</li>
<li>Resistance - Resistance operations and tactics</li>
<li>Interrogation - Interview and interrogation techniques</li>
<li>Multi-Environment-Operations - Operations across different environments</li>
</ul>
</li>
<li>Mathematics & Problem Solving
<ul>
<li>Dans-Mathmaxx - Core mathematics capabilities</li>
<li>Dans-Mathmaxx-Numina-CoT - Chain of thought mathematical reasoning</li>
<li>Math-Multiturn-1K-ShareGPT - Multi-turn math problem solving</li>
</ul>
</li>
<li>Benchmarking & Testing
<ul>
<li>Dans-Benchmaxx - Prepares the model for "answer only" style benchmarks and helps keep it from talking too much when the situation calls for a terse answer.</li>
<li>Dans-Benchmaxx-COT - The same, but for chain-of-thought-then-answer style testing.</li>
</ul>
</li>
<li>Programming & Code
<ul>
<li>Dans-Codemaxx-LeetCode - Programmatically produced from rosettacode</li>
<li>Dans-Codemaxx-CodeFeedback - Dataset focused on correction after producing incorrect code.</li>
<li>Dans-Codemaxx-Bigcode-SelfInstruct - Subset from the Bigcode SelfInstruct dataset</li>
</ul>
</li>
<li>Task Specific Training
<ul>
<li>Dans-Taskmaxx - General task handling</li>
<li>Dans-Taskmaxx-DataPrepper - Data preparation and cleaning</li>
<li>Dans-Taskmaxx-ConcurrentQA - Multi-hop retrieval-based tasks</li>
<li>Dans-Taskmaxx-TableGPT - Table data processing</li>
<li>Dans-Taskmaxx-SciRIFF - Scientific paper analysis</li>
<li>Dans-Taskmaxx-Edit - Text editing and revision</li>
</ul>
</li>
<li>System & Tool Usage
<ul>
<li>Dans-Toolmaxx-Agent - Tool usage and agent behavior</li>
<li>Dans-Toolmaxx-ShellCommands - Command line operations</li>
<li>Dans-Toolmaxx-Functions - API and function calling</li>
</ul>
</li>
<li>Creative & Writing
<ul>
<li>Dans-ASCIIMaxx-Wordart - ASCII word art creation</li>
<li>Dans-Prosemaxx-Gutenberg - Summary style prompt writing instructions sourced from the Gutenberg project.</li>
<li>Dans-Prosemaxx-Cowriter - Back and forth co-writing dataset sourced from human written literature</li>
<li>Dans-Prosemaxx-Adventure - Interactive fiction throwbacks such as Zork, Anchorhead, and Hunting the Ripper</li>
<li>Dans-Prosemaxx-WritingPrompts - Prompt based writing instructions</li>
</ul>
</li>
<li>Assistant & Personality
<ul>
<li>Dans-Assistantmaxx series - Various assistant behaviors and capabilities</li>
<li>Dans-Personamaxx series - Personality and character development</li>
<li>Dans-Logicmaxx series - Logical reasoning and problem solving</li>
</ul>
</li>
<li>Instruction Following
<ul>
<li>Dans-Systemmaxx - System message training data optimized to help the model reject bad patterns</li>
</ul>
</li>
</ul>
</details>
<h3>Training</h3>
<p>Fully finetuned for 2 epochs on 1x H200 SXM (88 hours of training)</p>
<p class="badge-container">
<a href="https://github.com/OpenAccess-AI-Collective/axolotl" target="_blank" rel="noopener noreferrer">
<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>
</a>
</p>
<h3>Support Development</h3>
<p>Development is limited by funding and resources. To help support:</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a href="https://www.buymeacoffee.com/visually" target="_blank" rel="noopener noreferrer">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="45" width="162">
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow: inset -2px -2px 5px rgba(0,0,0,0.3), 2px 2px 5px rgba(0,0,0,0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow: inset -1px -1px 4px rgba(0,0,0,0.3), 1px 1px 4px rgba(0,0,0,0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0,0,0,0.5),
inset 0 0 4px rgba(0,0,0,0.4),
inset 2px 2px 4px rgba(255,255,255,0.05),
inset -2px -2px 4px rgba(0,0,0,0.8),
0 0 2px rgba(0,0,0,0.6),
-1px -1px 4px rgba(255,255,255,0.1),
1px 1px 4px rgba(0,0,0,0.3);
}
.crt-bezel::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255,255,255,0.03) 0%,
rgba(255,255,255,0) 40%,
rgba(0,0,0,0.1) 60%,
rgba(0,0,0,0.2) 100%);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: 'VT323', monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
animation: flicker 0.15s infinite;
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0,0,0,0.9),
inset 0 0 8px rgba(0,0,0,0.8),
0 0 5px rgba(0,0,0,0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2, .terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(11px, 1.3vw, 14px);
white-space: pre-wrap;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(rgba(18, 16, 16, 0) 50%, rgba(0, 0, 0, 0.25) 50%), url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==');
background-size: 100% 2.5px;
animation: scan 1s linear infinite;
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
animation: vignette-pulse 3s infinite;
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container, .coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img, .coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong, .terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p {
color: #f0f0f0; /* off-white color for assistant responses */
}
.terminal-screen p, .terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: 'VT323', monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(11px, 1.3vw, 14px);
white-space: pre-wrap;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
}
@keyframes flicker {
0% { opacity: 0.98; }
50% { opacity: 1; }
100% { opacity: 0.99; }
}
@keyframes scan {
0% { transform: translateY(0); }
100% { transform: translateY(4px); }
}
@keyframes vignette-pulse {
0% { opacity: 0.8; }
50% { opacity: 1; }
100% { opacity: 0.8; }
}
</style> | [
"SUMMARIZATION"
] | [
"CRAFT"
] | Non_BioNLP |
Omartificial-Intelligence-Space/Arabic-all-nli-triplet-Matryoshka | Omartificial-Intelligence-Space | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"mteb",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:us"
] | 1,718 | 1,737 | 217 | 2 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
- mteb
inference: false
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط
النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث
إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء
مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة
شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
model-index:
- name: SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrieval (ar)
type: miracl/mmteb-miracl
config: ar
split: dev
revision: main
metrics:
- type: ndcg_at_1
value: 19.233
- type: ndcg_at_3
value: 21.393
- type: ndcg_at_5
value: 23.347
- type: ndcg_at_10
value: 26.273999999999997
- type: ndcg_at_20
value: 28.591
- type: ndcg_at_100
value: 32.098
- type: ndcg_at_1000
value: 34.971000000000004
- type: map_at_1
value: 12.555
- type: map_at_3
value: 17.763
- type: map_at_5
value: 19.317
- type: map_at_10
value: 20.748
- type: map_at_20
value: 21.535
- type: map_at_100
value: 22.147
- type: map_at_1000
value: 22.275
- type: recall_at_1
value: 12.555
- type: recall_at_3
value: 22.576
- type: recall_at_5
value: 27.681
- type: recall_at_10
value: 35.461
- type: recall_at_20
value: 43.097
- type: recall_at_100
value: 58.902
- type: recall_at_1000
value: 78.33099999999999
- type: precision_at_1
value: 19.233
- type: precision_at_3
value: 12.65
- type: precision_at_5
value: 9.626999999999999
- type: precision_at_10
value: 6.35
- type: precision_at_20
value: 3.961
- type: precision_at_100
value: 1.118
- type: precision_at_1000
value: 0.152
- type: mrr_at_1
value: 19.2334
- type: mrr_at_3
value: 25.1266
- type: mrr_at_5
value: 26.4681
- type: mrr_at_10
value: 27.6315
- type: mrr_at_20
value: 28.1315
- type: mrr_at_100
value: 28.4874
- type: mrr_at_1000
value: 28.5524
- type: nauc_ndcg_at_1_max
value: 12.8914
- type: nauc_ndcg_at_1_std
value: 10.4594
- type: nauc_ndcg_at_1_diff1
value: 23.8138
- type: nauc_ndcg_at_3_max
value: 12.3382
- type: nauc_ndcg_at_3_std
value: 11.5929
- type: nauc_ndcg_at_3_diff1
value: 19.1347
- type: nauc_ndcg_at_5_max
value: 14.0129
- type: nauc_ndcg_at_5_std
value: 13.6398
- type: nauc_ndcg_at_5_diff1
value: 19.8536
- type: nauc_ndcg_at_10_max
value: 14.538300000000001
- type: nauc_ndcg_at_10_std
value: 15.933800000000002
- type: nauc_ndcg_at_10_diff1
value: 19.7082
- type: nauc_ndcg_at_20_max
value: 15.3478
- type: nauc_ndcg_at_20_std
value: 18.4803
- type: nauc_ndcg_at_20_diff1
value: 18.8725
- type: nauc_ndcg_at_100_max
value: 16.2684
- type: nauc_ndcg_at_100_std
value: 21.147199999999998
- type: nauc_ndcg_at_100_diff1
value: 19.0854
- type: nauc_ndcg_at_1000_max
value: 16.6485
- type: nauc_ndcg_at_1000_std
value: 21.2042
- type: nauc_ndcg_at_1000_diff1
value: 19.411
- type: nauc_map_at_1_max
value: 8.571299999999999
- type: nauc_map_at_1_std
value: 5.2620000000000005
- type: nauc_map_at_1_diff1
value: 25.1772
- type: nauc_map_at_3_max
value: 10.5142
- type: nauc_map_at_3_std
value: 8.8853
- type: nauc_map_at_3_diff1
value: 19.9708
- type: nauc_map_at_5_max
value: 12.2728
- type: nauc_map_at_5_std
value: 10.8387
- type: nauc_map_at_5_diff1
value: 20.2731
- type: nauc_map_at_10_max
value: 12.909899999999999
- type: nauc_map_at_10_std
value: 12.4311
- type: nauc_map_at_10_diff1
value: 20.079900000000002
- type: nauc_map_at_20_max
value: 13.367399999999998
- type: nauc_map_at_20_std
value: 13.5572
- type: nauc_map_at_20_diff1
value: 19.775000000000002
- type: nauc_map_at_100_max
value: 13.716600000000001
- type: nauc_map_at_100_std
value: 14.234
- type: nauc_map_at_100_diff1
value: 19.831
- type: nauc_map_at_1000_max
value: 13.736400000000001
- type: nauc_map_at_1000_std
value: 14.265600000000001
- type: nauc_map_at_1000_diff1
value: 19.8517
- type: nauc_recall_at_1_max
value: 8.571299999999999
- type: nauc_recall_at_1_std
value: 5.2620000000000005
- type: nauc_recall_at_1_diff1
value: 25.1772
- type: nauc_recall_at_3_max
value: 10.1169
- type: nauc_recall_at_3_std
value: 10.1543
- type: nauc_recall_at_3_diff1
value: 16.4652
- type: nauc_recall_at_5_max
value: 13.6919
- type: nauc_recall_at_5_std
value: 14.410400000000001
- type: nauc_recall_at_5_diff1
value: 17.0477
- type: nauc_recall_at_10_max
value: 13.8916
- type: nauc_recall_at_10_std
value: 18.4174
- type: nauc_recall_at_10_diff1
value: 16.3955
- type: nauc_recall_at_20_max
value: 15.0336
- type: nauc_recall_at_20_std
value: 24.3934
- type: nauc_recall_at_20_diff1
value: 13.834299999999999
- type: nauc_recall_at_100_max
value: 16.988
- type: nauc_recall_at_100_std
value: 34.8989
- type: nauc_recall_at_100_diff1
value: 14.1371
- type: nauc_recall_at_1000_max
value: 22.006700000000002
- type: nauc_recall_at_1000_std
value: 43.2671
- type: nauc_recall_at_1000_diff1
value: 15.6926
- type: nauc_precision_at_1_max
value: 12.8914
- type: nauc_precision_at_1_std
value: 10.4594
- type: nauc_precision_at_1_diff1
value: 23.8138
- type: nauc_precision_at_3_max
value: 17.4418
- type: nauc_precision_at_3_std
value: 18.2472
- type: nauc_precision_at_3_diff1
value: 14.380299999999998
- type: nauc_precision_at_5_max
value: 21.7353
- type: nauc_precision_at_5_std
value: 22.7454
- type: nauc_precision_at_5_diff1
value: 14.671999999999999
- type: nauc_precision_at_10_max
value: 22.4616
- type: nauc_precision_at_10_std
value: 27.271099999999997
- type: nauc_precision_at_10_diff1
value: 13.025
- type: nauc_precision_at_20_max
value: 23.610400000000002
- type: nauc_precision_at_20_std
value: 32.0969
- type: nauc_precision_at_20_diff1
value: 9.5973
- type: nauc_precision_at_100_max
value: 24.1842
- type: nauc_precision_at_100_std
value: 35.335
- type: nauc_precision_at_100_diff1
value: 7.833900000000001
- type: nauc_precision_at_1000_max
value: 21.5183
- type: nauc_precision_at_1000_std
value: 30.4104
- type: nauc_precision_at_1000_diff1
value: 4.7376000000000005
- type: nauc_mrr_at_1_max
value: 12.8914
- type: nauc_mrr_at_1_std
value: 10.4594
- type: nauc_mrr_at_1_diff1
value: 23.8138
- type: nauc_mrr_at_3_max
value: 14.1404
- type: nauc_mrr_at_3_std
value: 13.8728
- type: nauc_mrr_at_3_diff1
value: 20.898600000000002
- type: nauc_mrr_at_5_max
value: 15.0032
- type: nauc_mrr_at_5_std
value: 15.1412
- type: nauc_mrr_at_5_diff1
value: 21.0216
- type: nauc_mrr_at_10_max
value: 14.9212
- type: nauc_mrr_at_10_std
value: 15.836
- type: nauc_mrr_at_10_diff1
value: 20.9665
- type: nauc_mrr_at_20_max
value: 15.046399999999998
- type: nauc_mrr_at_20_std
value: 16.2257
- type: nauc_mrr_at_20_diff1
value: 20.816599999999998
- type: nauc_mrr_at_100_max
value: 15.0342
- type: nauc_mrr_at_100_std
value: 16.328899999999997
- type: nauc_mrr_at_100_diff1
value: 20.8347
- type: nauc_mrr_at_1000_max
value: 15.0313
- type: nauc_mrr_at_1000_std
value: 16.3027
- type: nauc_mrr_at_1000_diff1
value: 20.846
- type: main_score
value: 26.273999999999997
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrievalHardNegatives (ar)
type: mteb/miracl-hard-negatives
config: ar
split: dev
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
metrics:
- type: ndcg_at_1
value: 20.7
- type: ndcg_at_3
value: 23.766000000000002
- type: ndcg_at_5
value: 26.479000000000003
- type: ndcg_at_10
value: 30.152
- type: ndcg_at_20
value: 33.123000000000005
- type: ndcg_at_100
value: 37.721
- type: ndcg_at_1000
value: 40.469
- type: map_at_1
value: 13.067
- type: map_at_3
value: 19.303
- type: map_at_5
value: 21.406
- type: map_at_10
value: 23.195
- type: map_at_20
value: 24.256
- type: map_at_100
value: 25.115
- type: map_at_1000
value: 25.257
- type: recall_at_1
value: 13.067
- type: recall_at_3
value: 25.663000000000004
- type: recall_at_5
value: 32.707
- type: recall_at_10
value: 42.458
- type: recall_at_20
value: 51.983000000000004
- type: recall_at_100
value: 72.509
- type: recall_at_1000
value: 90.62400000000001
- type: precision_at_1
value: 20.7
- type: precision_at_3
value: 14.366999999999999
- type: precision_at_5
value: 11.360000000000001
- type: precision_at_10
value: 7.68
- type: precision_at_20
value: 4.88
- type: precision_at_100
value: 1.413
- type: precision_at_1000
value: 0.179
- type: mrr_at_1
value: 20.7
- type: mrr_at_3
value: 27.750000000000004
- type: mrr_at_5
value: 29.659999999999997
- type: mrr_at_10
value: 31.072499999999998
- type: mrr_at_20
value: 31.680799999999998
- type: mrr_at_100
value: 32.0878
- type: mrr_at_1000
value: 32.1434
- type: nauc_ndcg_at_1_max
value: 9.268
- type: nauc_ndcg_at_1_std
value: 18.432000000000002
- type: nauc_ndcg_at_1_diff1
value: 20.2302
- type: nauc_ndcg_at_3_max
value: 10.9481
- type: nauc_ndcg_at_3_std
value: 16.919999999999998
- type: nauc_ndcg_at_3_diff1
value: 17.1518
- type: nauc_ndcg_at_5_max
value: 13.112499999999999
- type: nauc_ndcg_at_5_std
value: 19.4344
- type: nauc_ndcg_at_5_diff1
value: 16.994400000000002
- type: nauc_ndcg_at_10_max
value: 13.5807
- type: nauc_ndcg_at_10_std
value: 22.0576
- type: nauc_ndcg_at_10_diff1
value: 15.806700000000001
- type: nauc_ndcg_at_20_max
value: 15.038499999999999
- type: nauc_ndcg_at_20_std
value: 24.616699999999998
- type: nauc_ndcg_at_20_diff1
value: 15.0551
- type: nauc_ndcg_at_100_max
value: 16.4791
- type: nauc_ndcg_at_100_std
value: 27.3069
- type: nauc_ndcg_at_100_diff1
value: 15.3881
- type: nauc_ndcg_at_1000_max
value: 16.4607
- type: nauc_ndcg_at_1000_std
value: 27.2117
- type: nauc_ndcg_at_1000_diff1
value: 15.229000000000001
- type: nauc_map_at_1_max
value: 6.5943000000000005
- type: nauc_map_at_1_std
value: 13.303999999999998
- type: nauc_map_at_1_diff1
value: 21.8437
- type: nauc_map_at_3_max
value: 8.872399999999999
- type: nauc_map_at_3_std
value: 14.1544
- type: nauc_map_at_3_diff1
value: 18.2986
- type: nauc_map_at_5_max
value: 10.7963
- type: nauc_map_at_5_std
value: 16.2275
- type: nauc_map_at_5_diff1
value: 17.896
- type: nauc_map_at_10_max
value: 11.5053
- type: nauc_map_at_10_std
value: 17.9816
- type: nauc_map_at_10_diff1
value: 17.3155
- type: nauc_map_at_20_max
value: 12.3459
- type: nauc_map_at_20_std
value: 19.2359
- type: nauc_map_at_20_diff1
value: 16.868
- type: nauc_map_at_100_max
value: 12.753300000000001
- type: nauc_map_at_100_std
value: 20.0431
- type: nauc_map_at_100_diff1
value: 16.8889
- type: nauc_map_at_1000_max
value: 12.7747
- type: nauc_map_at_1000_std
value: 20.1047
- type: nauc_map_at_1000_diff1
value: 16.883699999999997
- type: nauc_recall_at_1_max
value: 6.5943000000000005
- type: nauc_recall_at_1_std
value: 13.303999999999998
- type: nauc_recall_at_1_diff1
value: 21.8437
- type: nauc_recall_at_3_max
value: 8.7966
- type: nauc_recall_at_3_std
value: 12.7517
- type: nauc_recall_at_3_diff1
value: 15.1844
- type: nauc_recall_at_5_max
value: 12.9126
- type: nauc_recall_at_5_std
value: 17.4967
- type: nauc_recall_at_5_diff1
value: 13.9756
- type: nauc_recall_at_10_max
value: 12.3656
- type: nauc_recall_at_10_std
value: 21.7246
- type: nauc_recall_at_10_diff1
value: 10.6946
- type: nauc_recall_at_20_max
value: 15.9849
- type: nauc_recall_at_20_std
value: 28.2084
- type: nauc_recall_at_20_diff1
value: 9.3399
- type: nauc_recall_at_100_max
value: 22.4235
- type: nauc_recall_at_100_std
value: 41.6796
- type: nauc_recall_at_100_diff1
value: 11.3943
- type: nauc_recall_at_1000_max
value: 33.9199
- type: nauc_recall_at_1000_std
value: 63.458800000000004
- type: nauc_recall_at_1000_diff1
value: 5.1713000000000005
- type: nauc_precision_at_1_max
value: 9.268
- type: nauc_precision_at_1_std
value: 18.432000000000002
- type: nauc_precision_at_1_diff1
value: 20.2302
- type: nauc_precision_at_3_max
value: 16.1989
- type: nauc_precision_at_3_std
value: 22.823
- type: nauc_precision_at_3_diff1
value: 12.8433
- type: nauc_precision_at_5_max
value: 20.9029
- type: nauc_precision_at_5_std
value: 27.609099999999998
- type: nauc_precision_at_5_diff1
value: 10.501000000000001
- type: nauc_precision_at_10_max
value: 22.0715
- type: nauc_precision_at_10_std
value: 32.2903
- type: nauc_precision_at_10_diff1
value: 7.1502
- type: nauc_precision_at_20_max
value: 23.1036
- type: nauc_precision_at_20_std
value: 34.955000000000005
- type: nauc_precision_at_20_diff1
value: 2.5075
- type: nauc_precision_at_100_max
value: 23.8401
- type: nauc_precision_at_100_std
value: 35.5452
- type: nauc_precision_at_100_diff1
value: -0.3836
- type: nauc_precision_at_1000_max
value: 18.519199999999998
- type: nauc_precision_at_1000_std
value: 27.2343
- type: nauc_precision_at_1000_diff1
value: -4.26
- type: nauc_mrr_at_1_max
value: 9.268
- type: nauc_mrr_at_1_std
value: 18.432000000000002
- type: nauc_mrr_at_1_diff1
value: 20.2302
- type: nauc_mrr_at_3_max
value: 12.9175
- type: nauc_mrr_at_3_std
value: 21.610599999999998
- type: nauc_mrr_at_3_diff1
value: 17.6036
- type: nauc_mrr_at_5_max
value: 13.761000000000001
- type: nauc_mrr_at_5_std
value: 23.091
- type: nauc_mrr_at_5_diff1
value: 17.217
- type: nauc_mrr_at_10_max
value: 13.788400000000001
- type: nauc_mrr_at_10_std
value: 23.91
- type: nauc_mrr_at_10_diff1
value: 16.847
- type: nauc_mrr_at_20_max
value: 13.689499999999999
- type: nauc_mrr_at_20_std
value: 23.976
- type: nauc_mrr_at_20_diff1
value: 16.845499999999998
- type: nauc_mrr_at_100_max
value: 13.712
- type: nauc_mrr_at_100_std
value: 24.0657
- type: nauc_mrr_at_100_diff1
value: 16.852800000000002
- type: nauc_mrr_at_1000_max
value: 13.7073
- type: nauc_mrr_at_1000_std
value: 24.046300000000002
- type: nauc_mrr_at_1000_diff1
value: 16.8626
- type: main_score
value: 30.152
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.451
- type: ndcg_at_3
value: 60.302
- type: ndcg_at_5
value: 62.432
- type: ndcg_at_10
value: 63.541000000000004
- type: ndcg_at_20
value: 64.82
- type: ndcg_at_100
value: 67.54599999999999
- type: ndcg_at_1000
value: 68.161
- type: map_at_1
value: 51.451
- type: map_at_3
value: 58.026999999999994
- type: map_at_5
value: 59.197
- type: map_at_10
value: 59.644
- type: map_at_20
value: 59.999
- type: map_at_100
value: 60.375
- type: map_at_1000
value: 60.401
- type: recall_at_1
value: 51.451
- type: recall_at_3
value: 66.925
- type: recall_at_5
value: 72.14699999999999
- type: recall_at_10
value: 75.629
- type: recall_at_20
value: 80.658
- type: recall_at_100
value: 95.358
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.451
- type: precision_at_3
value: 22.308
- type: precision_at_5
value: 14.429
- type: precision_at_10
value: 7.563000000000001
- type: precision_at_20
value: 4.0329999999999995
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.450700000000005
- type: mrr_at_3
value: 58.0271
- type: mrr_at_5
value: 59.1973
- type: mrr_at_10
value: 59.6441
- type: mrr_at_20
value: 59.999100000000006
- type: mrr_at_100
value: 60.3751
- type: mrr_at_1000
value: 60.401
- type: nauc_ndcg_at_1_max
value: 46.2584
- type: nauc_ndcg_at_1_std
value: 9.1712
- type: nauc_ndcg_at_1_diff1
value: 61.232299999999995
- type: nauc_ndcg_at_3_max
value: 53.9072
- type: nauc_ndcg_at_3_std
value: 18.9815
- type: nauc_ndcg_at_3_diff1
value: 59.8943
- type: nauc_ndcg_at_5_max
value: 54.5939
- type: nauc_ndcg_at_5_std
value: 20.9544
- type: nauc_ndcg_at_5_diff1
value: 58.500600000000006
- type: nauc_ndcg_at_10_max
value: 54.010999999999996
- type: nauc_ndcg_at_10_std
value: 21.0626
- type: nauc_ndcg_at_10_diff1
value: 58.15820000000001
- type: nauc_ndcg_at_20_max
value: 53.339400000000005
- type: nauc_ndcg_at_20_std
value: 19.526699999999998
- type: nauc_ndcg_at_20_diff1
value: 57.8706
- type: nauc_ndcg_at_100_max
value: 52.7445
- type: nauc_ndcg_at_100_std
value: 18.756500000000003
- type: nauc_ndcg_at_100_diff1
value: 58.919900000000005
- type: nauc_ndcg_at_1000_max
value: 52.607899999999994
- type: nauc_ndcg_at_1000_std
value: 18.409
- type: nauc_ndcg_at_1000_diff1
value: 58.981300000000005
- type: nauc_map_at_1_max
value: 46.2584
- type: nauc_map_at_1_std
value: 9.1712
- type: nauc_map_at_1_diff1
value: 61.232299999999995
- type: nauc_map_at_3_max
value: 51.8763
- type: nauc_map_at_3_std
value: 16.366
- type: nauc_map_at_3_diff1
value: 60.0428
- type: nauc_map_at_5_max
value: 52.1957
- type: nauc_map_at_5_std
value: 17.354
- type: nauc_map_at_5_diff1
value: 59.3285
- type: nauc_map_at_10_max
value: 51.9592
- type: nauc_map_at_10_std
value: 17.368
- type: nauc_map_at_10_diff1
value: 59.21419999999999
- type: nauc_map_at_20_max
value: 51.78040000000001
- type: nauc_map_at_20_std
value: 16.947000000000003
- type: nauc_map_at_20_diff1
value: 59.1612
- type: nauc_map_at_100_max
value: 51.7167
- type: nauc_map_at_100_std
value: 16.8964
- type: nauc_map_at_100_diff1
value: 59.336
- type: nauc_map_at_1000_max
value: 51.711600000000004
- type: nauc_map_at_1000_std
value: 16.8858
- type: nauc_map_at_1000_diff1
value: 59.337700000000005
- type: nauc_recall_at_1_max
value: 46.2584
- type: nauc_recall_at_1_std
value: 9.1712
- type: nauc_recall_at_1_diff1
value: 61.232299999999995
- type: nauc_recall_at_3_max
value: 60.6484
- type: nauc_recall_at_3_std
value: 27.6682
- type: nauc_recall_at_3_diff1
value: 59.49870000000001
- type: nauc_recall_at_5_max
value: 63.5264
- type: nauc_recall_at_5_std
value: 34.5355
- type: nauc_recall_at_5_diff1
value: 55.2913
- type: nauc_recall_at_10_max
value: 62.1038
- type: nauc_recall_at_10_std
value: 36.4565
- type: nauc_recall_at_10_diff1
value: 53.4771
- type: nauc_recall_at_20_max
value: 59.6506
- type: nauc_recall_at_20_std
value: 30.444300000000002
- type: nauc_recall_at_20_diff1
value: 50.6836
- type: nauc_recall_at_100_max
value: 58.4695
- type: nauc_recall_at_100_std
value: 33.5819
- type: nauc_recall_at_100_diff1
value: 56.2667
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 46.2584
- type: nauc_precision_at_1_std
value: 9.1712
- type: nauc_precision_at_1_diff1
value: 61.232299999999995
- type: nauc_precision_at_3_max
value: 60.6484
- type: nauc_precision_at_3_std
value: 27.6682
- type: nauc_precision_at_3_diff1
value: 59.49870000000001
- type: nauc_precision_at_5_max
value: 63.5264
- type: nauc_precision_at_5_std
value: 34.5355
- type: nauc_precision_at_5_diff1
value: 55.2913
- type: nauc_precision_at_10_max
value: 62.1038
- type: nauc_precision_at_10_std
value: 36.4565
- type: nauc_precision_at_10_diff1
value: 53.4771
- type: nauc_precision_at_20_max
value: 59.6506
- type: nauc_precision_at_20_std
value: 30.444300000000002
- type: nauc_precision_at_20_diff1
value: 50.6836
- type: nauc_precision_at_100_max
value: 58.4695
- type: nauc_precision_at_100_std
value: 33.5819
- type: nauc_precision_at_100_diff1
value: 56.2667
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 46.2584
- type: nauc_mrr_at_1_std
value: 9.1712
- type: nauc_mrr_at_1_diff1
value: 61.232299999999995
- type: nauc_mrr_at_3_max
value: 51.8763
- type: nauc_mrr_at_3_std
value: 16.366
- type: nauc_mrr_at_3_diff1
value: 60.0428
- type: nauc_mrr_at_5_max
value: 52.1957
- type: nauc_mrr_at_5_std
value: 17.354
- type: nauc_mrr_at_5_diff1
value: 59.3285
- type: nauc_mrr_at_10_max
value: 51.9592
- type: nauc_mrr_at_10_std
value: 17.368
- type: nauc_mrr_at_10_diff1
value: 59.21419999999999
- type: nauc_mrr_at_20_max
value: 51.78040000000001
- type: nauc_mrr_at_20_std
value: 16.947000000000003
- type: nauc_mrr_at_20_diff1
value: 59.1612
- type: nauc_mrr_at_100_max
value: 51.7167
- type: nauc_mrr_at_100_std
value: 16.8964
- type: nauc_mrr_at_100_diff1
value: 59.336
- type: nauc_mrr_at_1000_max
value: 51.711600000000004
- type: nauc_mrr_at_1000_std
value: 16.8858
- type: nauc_mrr_at_1000_diff1
value: 59.337700000000005
- type: main_score
value: 63.541000000000004
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.690999999999995
- type: ndcg_at_3
value: 63.365
- type: ndcg_at_5
value: 65.922
- type: ndcg_at_10
value: 67.949
- type: ndcg_at_20
value: 69.733
- type: ndcg_at_100
value: 71.285
- type: ndcg_at_1000
value: 71.355
- type: map_at_1
value: 51.690999999999995
- type: map_at_3
value: 60.548
- type: map_at_5
value: 61.948
- type: map_at_10
value: 62.78399999999999
- type: map_at_20
value: 63.248000000000005
- type: map_at_100
value: 63.471999999999994
- type: map_at_1000
value: 63.476
- type: recall_at_1
value: 51.690999999999995
- type: recall_at_3
value: 71.49799999999999
- type: recall_at_5
value: 77.778
- type: recall_at_10
value: 84.05799999999999
- type: recall_at_20
value: 91.304
- type: recall_at_100
value: 99.517
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.690999999999995
- type: precision_at_3
value: 23.833
- type: precision_at_5
value: 15.556000000000001
- type: precision_at_10
value: 8.405999999999999
- type: precision_at_20
value: 4.565
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.6908
- type: mrr_at_3
value: 60.5475
- type: mrr_at_5
value: 61.948499999999996
- type: mrr_at_10
value: 62.7845
- type: mrr_at_20
value: 63.2478
- type: mrr_at_100
value: 63.471599999999995
- type: mrr_at_1000
value: 63.4757
- type: nauc_ndcg_at_1_max
value: 48.6122
- type: nauc_ndcg_at_1_std
value: 18.3707
- type: nauc_ndcg_at_1_diff1
value: 65.9894
- type: nauc_ndcg_at_3_max
value: 56.2928
- type: nauc_ndcg_at_3_std
value: 27.526899999999998
- type: nauc_ndcg_at_3_diff1
value: 56.5762
- type: nauc_ndcg_at_5_max
value: 56.594199999999994
- type: nauc_ndcg_at_5_std
value: 29.916500000000003
- type: nauc_ndcg_at_5_diff1
value: 56.1361
- type: nauc_ndcg_at_10_max
value: 58.07
- type: nauc_ndcg_at_10_std
value: 29.687400000000004
- type: nauc_ndcg_at_10_diff1
value: 58.537099999999995
- type: nauc_ndcg_at_20_max
value: 57.4515
- type: nauc_ndcg_at_20_std
value: 29.8421
- type: nauc_ndcg_at_20_diff1
value: 58.796499999999995
- type: nauc_ndcg_at_100_max
value: 55.8115
- type: nauc_ndcg_at_100_std
value: 27.851300000000002
- type: nauc_ndcg_at_100_diff1
value: 59.395399999999995
- type: nauc_ndcg_at_1000_max
value: 55.671800000000005
- type: nauc_ndcg_at_1000_std
value: 27.6646
- type: nauc_ndcg_at_1000_diff1
value: 59.3548
- type: nauc_map_at_1_max
value: 48.6122
- type: nauc_map_at_1_std
value: 18.3707
- type: nauc_map_at_1_diff1
value: 65.9894
- type: nauc_map_at_3_max
value: 54.278000000000006
- type: nauc_map_at_3_std
value: 25.3062
- type: nauc_map_at_3_diff1
value: 59.0998
- type: nauc_map_at_5_max
value: 54.38269999999999
- type: nauc_map_at_5_std
value: 26.451400000000003
- type: nauc_map_at_5_diff1
value: 59.0233
- type: nauc_map_at_10_max
value: 54.915000000000006
- type: nauc_map_at_10_std
value: 26.3247
- type: nauc_map_at_10_diff1
value: 59.939
- type: nauc_map_at_20_max
value: 54.760600000000004
- type: nauc_map_at_20_std
value: 26.3843
- type: nauc_map_at_20_diff1
value: 60.019800000000004
- type: nauc_map_at_100_max
value: 54.548700000000004
- type: nauc_map_at_100_std
value: 26.167099999999998
- type: nauc_map_at_100_diff1
value: 60.091499999999996
- type: nauc_map_at_1000_max
value: 54.542
- type: nauc_map_at_1000_std
value: 26.158199999999997
- type: nauc_map_at_1000_diff1
value: 60.0897
- type: nauc_recall_at_1_max
value: 48.6122
- type: nauc_recall_at_1_std
value: 18.3707
- type: nauc_recall_at_1_diff1
value: 65.9894
- type: nauc_recall_at_3_max
value: 63.3309
- type: nauc_recall_at_3_std
value: 35.1892
- type: nauc_recall_at_3_diff1
value: 47.732200000000006
- type: nauc_recall_at_5_max
value: 65.7603
- type: nauc_recall_at_5_std
value: 44.6445
- type: nauc_recall_at_5_diff1
value: 43.9624
- type: nauc_recall_at_10_max
value: 76.059
- type: nauc_recall_at_10_std
value: 48.0321
- type: nauc_recall_at_10_diff1
value: 52.642999999999994
- type: nauc_recall_at_20_max
value: 81.92160000000001
- type: nauc_recall_at_20_std
value: 61.57040000000001
- type: nauc_recall_at_20_diff1
value: 51.0182
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 86.907
- type: nauc_recall_at_100_diff1
value: 72.2366
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 48.6122
- type: nauc_precision_at_1_std
value: 18.3707
- type: nauc_precision_at_1_diff1
value: 65.9894
- type: nauc_precision_at_3_max
value: 63.3309
- type: nauc_precision_at_3_std
value: 35.1892
- type: nauc_precision_at_3_diff1
value: 47.732200000000006
- type: nauc_precision_at_5_max
value: 65.7603
- type: nauc_precision_at_5_std
value: 44.6445
- type: nauc_precision_at_5_diff1
value: 43.9624
- type: nauc_precision_at_10_max
value: 76.059
- type: nauc_precision_at_10_std
value: 48.0321
- type: nauc_precision_at_10_diff1
value: 52.642999999999994
- type: nauc_precision_at_20_max
value: 81.92160000000001
- type: nauc_precision_at_20_std
value: 61.57040000000001
- type: nauc_precision_at_20_diff1
value: 51.0182
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 86.907
- type: nauc_precision_at_100_diff1
value: 72.2366
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 48.6122
- type: nauc_mrr_at_1_std
value: 18.3707
- type: nauc_mrr_at_1_diff1
value: 65.9894
- type: nauc_mrr_at_3_max
value: 54.278000000000006
- type: nauc_mrr_at_3_std
value: 25.3062
- type: nauc_mrr_at_3_diff1
value: 59.0998
- type: nauc_mrr_at_5_max
value: 54.38269999999999
- type: nauc_mrr_at_5_std
value: 26.451400000000003
- type: nauc_mrr_at_5_diff1
value: 59.0233
- type: nauc_mrr_at_10_max
value: 54.915000000000006
- type: nauc_mrr_at_10_std
value: 26.3247
- type: nauc_mrr_at_10_diff1
value: 59.939
- type: nauc_mrr_at_20_max
value: 54.760600000000004
- type: nauc_mrr_at_20_std
value: 26.3843
- type: nauc_mrr_at_20_diff1
value: 60.019800000000004
- type: nauc_mrr_at_100_max
value: 54.548700000000004
- type: nauc_mrr_at_100_std
value: 26.167099999999998
- type: nauc_mrr_at_100_diff1
value: 60.091499999999996
- type: nauc_mrr_at_1000_max
value: 54.542
- type: nauc_mrr_at_1000_std
value: 26.158199999999997
- type: nauc_mrr_at_1000_diff1
value: 60.0897
- type: main_score
value: 67.949
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.837999999999994
- type: ndcg_at_3
value: 61.207
- type: ndcg_at_5
value: 63.57000000000001
- type: ndcg_at_10
value: 65.679
- type: ndcg_at_20
value: 67.296
- type: ndcg_at_100
value: 69.298
- type: ndcg_at_1000
value: 69.68299999999999
- type: map_at_1
value: 51.837999999999994
- type: map_at_3
value: 58.897
- type: map_at_5
value: 60.193
- type: map_at_10
value: 61.053000000000004
- type: map_at_20
value: 61.499
- type: map_at_100
value: 61.79900000000001
- type: map_at_1000
value: 61.815
- type: recall_at_1
value: 51.837999999999994
- type: recall_at_3
value: 67.892
- type: recall_at_5
value: 73.694
- type: recall_at_10
value: 80.271
- type: recall_at_20
value: 86.654
- type: recall_at_100
value: 97.099
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.837999999999994
- type: precision_at_3
value: 22.631
- type: precision_at_5
value: 14.738999999999999
- type: precision_at_10
value: 8.027
- type: precision_at_20
value: 4.333
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.837500000000006
- type: mrr_at_3
value: 58.8975
- type: mrr_at_5
value: 60.1934
- type: mrr_at_10
value: 61.0533
- type: mrr_at_20
value: 61.498799999999996
- type: mrr_at_100
value: 61.7987
- type: mrr_at_1000
value: 61.8154
- type: nauc_ndcg_at_1_max
value: 52.8818
- type: nauc_ndcg_at_1_std
value: 2.2309
- type: nauc_ndcg_at_1_diff1
value: 67.1852
- type: nauc_ndcg_at_3_max
value: 57.75789999999999
- type: nauc_ndcg_at_3_std
value: 8.4361
- type: nauc_ndcg_at_3_diff1
value: 60.3313
- type: nauc_ndcg_at_5_max
value: 58.845000000000006
- type: nauc_ndcg_at_5_std
value: 10.3892
- type: nauc_ndcg_at_5_diff1
value: 59.6225
- type: nauc_ndcg_at_10_max
value: 58.440999999999995
- type: nauc_ndcg_at_10_std
value: 10.245
- type: nauc_ndcg_at_10_diff1
value: 60.3544
- type: nauc_ndcg_at_20_max
value: 58.0517
- type: nauc_ndcg_at_20_std
value: 9.229
- type: nauc_ndcg_at_20_diff1
value: 60.4508
- type: nauc_ndcg_at_100_max
value: 57.6593
- type: nauc_ndcg_at_100_std
value: 9.1281
- type: nauc_ndcg_at_100_diff1
value: 61.107299999999995
- type: nauc_ndcg_at_1000_max
value: 57.301100000000005
- type: nauc_ndcg_at_1000_std
value: 8.3789
- type: nauc_ndcg_at_1000_diff1
value: 61.433899999999994
- type: nauc_map_at_1_max
value: 52.8818
- type: nauc_map_at_1_std
value: 2.2309
- type: nauc_map_at_1_diff1
value: 67.1852
- type: nauc_map_at_3_max
value: 56.5338
- type: nauc_map_at_3_std
value: 6.6754999999999995
- type: nauc_map_at_3_diff1
value: 62.195299999999996
- type: nauc_map_at_5_max
value: 56.990300000000005
- type: nauc_map_at_5_std
value: 7.5465
- type: nauc_map_at_5_diff1
value: 61.898399999999995
- type: nauc_map_at_10_max
value: 56.7918
- type: nauc_map_at_10_std
value: 7.446400000000001
- type: nauc_map_at_10_diff1
value: 62.218399999999995
- type: nauc_map_at_20_max
value: 56.666399999999996
- type: nauc_map_at_20_std
value: 7.133399999999999
- type: nauc_map_at_20_diff1
value: 62.2684
- type: nauc_map_at_100_max
value: 56.60380000000001
- type: nauc_map_at_100_std
value: 7.143800000000001
- type: nauc_map_at_100_diff1
value: 62.332100000000004
- type: nauc_map_at_1000_max
value: 56.5913
- type: nauc_map_at_1000_std
value: 7.1212
- type: nauc_map_at_1000_diff1
value: 62.3459
- type: nauc_recall_at_1_max
value: 52.8818
- type: nauc_recall_at_1_std
value: 2.2309
- type: nauc_recall_at_1_diff1
value: 67.1852
- type: nauc_recall_at_3_max
value: 61.804
- type: nauc_recall_at_3_std
value: 14.3574
- type: nauc_recall_at_3_diff1
value: 54.0982
- type: nauc_recall_at_5_max
value: 66.14320000000001
- type: nauc_recall_at_5_std
value: 21.7224
- type: nauc_recall_at_5_diff1
value: 50.83259999999999
- type: nauc_recall_at_10_max
value: 66.2602
- type: nauc_recall_at_10_std
value: 23.880399999999998
- type: nauc_recall_at_10_diff1
value: 51.8906
- type: nauc_recall_at_20_max
value: 66.73219999999999
- type: nauc_recall_at_20_std
value: 22.267799999999998
- type: nauc_recall_at_20_diff1
value: 49.0047
- type: nauc_recall_at_100_max
value: 79.71249999999999
- type: nauc_recall_at_100_std
value: 56.6461
- type: nauc_recall_at_100_diff1
value: 41.9666
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 52.8818
- type: nauc_precision_at_1_std
value: 2.2309
- type: nauc_precision_at_1_diff1
value: 67.1852
- type: nauc_precision_at_3_max
value: 61.804
- type: nauc_precision_at_3_std
value: 14.3574
- type: nauc_precision_at_3_diff1
value: 54.0982
- type: nauc_precision_at_5_max
value: 66.14320000000001
- type: nauc_precision_at_5_std
value: 21.7224
- type: nauc_precision_at_5_diff1
value: 50.83259999999999
- type: nauc_precision_at_10_max
value: 66.2602
- type: nauc_precision_at_10_std
value: 23.880399999999998
- type: nauc_precision_at_10_diff1
value: 51.8906
- type: nauc_precision_at_20_max
value: 66.73219999999999
- type: nauc_precision_at_20_std
value: 22.267799999999998
- type: nauc_precision_at_20_diff1
value: 49.0047
- type: nauc_precision_at_100_max
value: 79.71249999999999
- type: nauc_precision_at_100_std
value: 56.6461
- type: nauc_precision_at_100_diff1
value: 41.9666
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 52.8818
- type: nauc_mrr_at_1_std
value: 2.2309
- type: nauc_mrr_at_1_diff1
value: 67.1852
- type: nauc_mrr_at_3_max
value: 56.5338
- type: nauc_mrr_at_3_std
value: 6.6754999999999995
- type: nauc_mrr_at_3_diff1
value: 62.195299999999996
- type: nauc_mrr_at_5_max
value: 56.990300000000005
- type: nauc_mrr_at_5_std
value: 7.5465
- type: nauc_mrr_at_5_diff1
value: 61.898399999999995
- type: nauc_mrr_at_10_max
value: 56.7918
- type: nauc_mrr_at_10_std
value: 7.446400000000001
- type: nauc_mrr_at_10_diff1
value: 62.218399999999995
- type: nauc_mrr_at_20_max
value: 56.666399999999996
- type: nauc_mrr_at_20_std
value: 7.133399999999999
- type: nauc_mrr_at_20_diff1
value: 62.2684
- type: nauc_mrr_at_100_max
value: 56.60380000000001
- type: nauc_mrr_at_100_std
value: 7.143800000000001
- type: nauc_mrr_at_100_diff1
value: 62.332100000000004
- type: nauc_mrr_at_1000_max
value: 56.5913
- type: nauc_mrr_at_1000_std
value: 7.1212
- type: nauc_mrr_at_1000_diff1
value: 62.3459
- type: main_score
value: 65.679
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 60.248000000000005
- type: ndcg_at_3
value: 69.247
- type: ndcg_at_5
value: 72.26599999999999
- type: ndcg_at_10
value: 73.994
- type: ndcg_at_20
value: 75.24300000000001
- type: ndcg_at_100
value: 76.547
- type: ndcg_at_1000
value: 76.547
- type: map_at_1
value: 60.248000000000005
- type: map_at_3
value: 67.184
- type: map_at_5
value: 68.83
- type: map_at_10
value: 69.49600000000001
- type: map_at_20
value: 69.83500000000001
- type: map_at_100
value: 70.031
- type: map_at_1000
value: 70.031
- type: recall_at_1
value: 60.248000000000005
- type: recall_at_3
value: 75.155
- type: recall_at_5
value: 82.609
- type: recall_at_10
value: 88.19900000000001
- type: recall_at_20
value: 93.16799999999999
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 60.248000000000005
- type: precision_at_3
value: 25.052000000000003
- type: precision_at_5
value: 16.522000000000002
- type: precision_at_10
value: 8.82
- type: precision_at_20
value: 4.658
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 60.248400000000004
- type: mrr_at_3
value: 67.1843
- type: mrr_at_5
value: 68.83019999999999
- type: mrr_at_10
value: 69.49600000000001
- type: mrr_at_20
value: 69.8345
- type: mrr_at_100
value: 70.03049999999999
- type: mrr_at_1000
value: 70.03049999999999
- type: nauc_ndcg_at_1_max
value: 51.1706
- type: nauc_ndcg_at_1_std
value: -8.1716
- type: nauc_ndcg_at_1_diff1
value: 73.443
- type: nauc_ndcg_at_3_max
value: 61.9764
- type: nauc_ndcg_at_3_std
value: 4.0499
- type: nauc_ndcg_at_3_diff1
value: 67.49589999999999
- type: nauc_ndcg_at_5_max
value: 60.4749
- type: nauc_ndcg_at_5_std
value: 8.561399999999999
- type: nauc_ndcg_at_5_diff1
value: 65.4543
- type: nauc_ndcg_at_10_max
value: 61.6645
- type: nauc_ndcg_at_10_std
value: 8.186200000000001
- type: nauc_ndcg_at_10_diff1
value: 67.3523
- type: nauc_ndcg_at_20_max
value: 60.9429
- type: nauc_ndcg_at_20_std
value: 7.7970999999999995
- type: nauc_ndcg_at_20_diff1
value: 67.1078
- type: nauc_ndcg_at_100_max
value: 59.452400000000004
- type: nauc_ndcg_at_100_std
value: 4.6432
- type: nauc_ndcg_at_100_diff1
value: 68.0564
- type: nauc_ndcg_at_1000_max
value: 59.452400000000004
- type: nauc_ndcg_at_1000_std
value: 4.6432
- type: nauc_ndcg_at_1000_diff1
value: 68.0564
- type: nauc_map_at_1_max
value: 51.1706
- type: nauc_map_at_1_std
value: -8.1716
- type: nauc_map_at_1_diff1
value: 73.443
- type: nauc_map_at_3_max
value: 59.385299999999994
- type: nauc_map_at_3_std
value: 1.1125
- type: nauc_map_at_3_diff1
value: 68.9884
- type: nauc_map_at_5_max
value: 58.473600000000005
- type: nauc_map_at_5_std
value: 3.273
- type: nauc_map_at_5_diff1
value: 68.0102
- type: nauc_map_at_10_max
value: 58.869899999999994
- type: nauc_map_at_10_std
value: 3.1175
- type: nauc_map_at_10_diff1
value: 68.7308
- type: nauc_map_at_20_max
value: 58.6638
- type: nauc_map_at_20_std
value: 2.9529
- type: nauc_map_at_20_diff1
value: 68.6787
- type: nauc_map_at_100_max
value: 58.465
- type: nauc_map_at_100_std
value: 2.5943
- type: nauc_map_at_100_diff1
value: 68.7955
- type: nauc_map_at_1000_max
value: 58.465
- type: nauc_map_at_1000_std
value: 2.5943
- type: nauc_map_at_1000_diff1
value: 68.7955
- type: nauc_recall_at_1_max
value: 51.1706
- type: nauc_recall_at_1_std
value: -8.1716
- type: nauc_recall_at_1_diff1
value: 73.443
- type: nauc_recall_at_3_max
value: 70.9051
- type: nauc_recall_at_3_std
value: 14.1759
- type: nauc_recall_at_3_diff1
value: 62.3143
- type: nauc_recall_at_5_max
value: 68.99159999999999
- type: nauc_recall_at_5_std
value: 33.226499999999994
- type: nauc_recall_at_5_diff1
value: 53.53790000000001
- type: nauc_recall_at_10_max
value: 79.36149999999999
- type: nauc_recall_at_10_std
value: 40.149
- type: nauc_recall_at_10_diff1
value: 59.90220000000001
- type: nauc_recall_at_20_max
value: 83.0489
- type: nauc_recall_at_20_std
value: 57.8707
- type: nauc_recall_at_20_diff1
value: 52.1552
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 51.1706
- type: nauc_precision_at_1_std
value: -8.1716
- type: nauc_precision_at_1_diff1
value: 73.443
- type: nauc_precision_at_3_max
value: 70.9051
- type: nauc_precision_at_3_std
value: 14.1759
- type: nauc_precision_at_3_diff1
value: 62.3143
- type: nauc_precision_at_5_max
value: 68.99159999999999
- type: nauc_precision_at_5_std
value: 33.226499999999994
- type: nauc_precision_at_5_diff1
value: 53.53790000000001
- type: nauc_precision_at_10_max
value: 79.36149999999999
- type: nauc_precision_at_10_std
value: 40.149
- type: nauc_precision_at_10_diff1
value: 59.90220000000001
- type: nauc_precision_at_20_max
value: 83.0489
- type: nauc_precision_at_20_std
value: 57.8707
- type: nauc_precision_at_20_diff1
value: 52.1552
- type: nauc_precision_at_100_max
value: .nan
- type: nauc_precision_at_100_std
value: .nan
- type: nauc_precision_at_100_diff1
value: .nan
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 51.1706
- type: nauc_mrr_at_1_std
value: -8.1716
- type: nauc_mrr_at_1_diff1
value: 73.443
- type: nauc_mrr_at_3_max
value: 59.385299999999994
- type: nauc_mrr_at_3_std
value: 1.1125
- type: nauc_mrr_at_3_diff1
value: 68.9884
- type: nauc_mrr_at_5_max
value: 58.473600000000005
- type: nauc_mrr_at_5_std
value: 3.273
- type: nauc_mrr_at_5_diff1
value: 68.0102
- type: nauc_mrr_at_10_max
value: 58.869899999999994
- type: nauc_mrr_at_10_std
value: 3.1175
- type: nauc_mrr_at_10_diff1
value: 68.7308
- type: nauc_mrr_at_20_max
value: 58.6638
- type: nauc_mrr_at_20_std
value: 2.9529
- type: nauc_mrr_at_20_diff1
value: 68.6787
- type: nauc_mrr_at_100_max
value: 58.465
- type: nauc_mrr_at_100_std
value: 2.5943
- type: nauc_mrr_at_100_diff1
value: 68.7955
- type: nauc_mrr_at_1000_max
value: 58.465
- type: nauc_mrr_at_1000_std
value: 2.5943
- type: nauc_mrr_at_1000_diff1
value: 68.7955
- type: main_score
value: 73.994
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 54.301
- type: ndcg_at_3
value: 65.598
- type: ndcg_at_5
value: 68.46600000000001
- type: ndcg_at_10
value: 70.511
- type: ndcg_at_20
value: 71.58200000000001
- type: ndcg_at_100
value: 73.014
- type: ndcg_at_1000
value: 73.165
- type: map_at_1
value: 54.301
- type: map_at_3
value: 62.814
- type: map_at_5
value: 64.4
- type: map_at_10
value: 65.21900000000001
- type: map_at_20
value: 65.503
- type: map_at_100
value: 65.712
- type: map_at_1000
value: 65.72
- type: recall_at_1
value: 54.301
- type: recall_at_3
value: 73.656
- type: recall_at_5
value: 80.645
- type: recall_at_10
value: 87.09700000000001
- type: recall_at_20
value: 91.398
- type: recall_at_100
value: 98.925
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 54.301
- type: precision_at_3
value: 24.552
- type: precision_at_5
value: 16.128999999999998
- type: precision_at_10
value: 8.709999999999999
- type: precision_at_20
value: 4.569999999999999
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 54.301100000000005
- type: mrr_at_3
value: 62.8136
- type: mrr_at_5
value: 64.3996
- type: mrr_at_10
value: 65.2187
- type: mrr_at_20
value: 65.5029
- type: mrr_at_100
value: 65.71209999999999
- type: mrr_at_1000
value: 65.72
- type: nauc_ndcg_at_1_max
value: 53.0712
- type: nauc_ndcg_at_1_std
value: 3.4898
- type: nauc_ndcg_at_1_diff1
value: 66.2941
- type: nauc_ndcg_at_3_max
value: 59.7553
- type: nauc_ndcg_at_3_std
value: 12.1777
- type: nauc_ndcg_at_3_diff1
value: 62.923399999999994
- type: nauc_ndcg_at_5_max
value: 59.16630000000001
- type: nauc_ndcg_at_5_std
value: 11.998899999999999
- type: nauc_ndcg_at_5_diff1
value: 61.015699999999995
- type: nauc_ndcg_at_10_max
value: 59.5264
- type: nauc_ndcg_at_10_std
value: 14.9617
- type: nauc_ndcg_at_10_diff1
value: 62.1769
- type: nauc_ndcg_at_20_max
value: 59.5248
- type: nauc_ndcg_at_20_std
value: 13.4521
- type: nauc_ndcg_at_20_diff1
value: 63.1046
- type: nauc_ndcg_at_100_max
value: 58.8175
- type: nauc_ndcg_at_100_std
value: 12.1264
- type: nauc_ndcg_at_100_diff1
value: 63.231
- type: nauc_ndcg_at_1000_max
value: 58.571200000000005
- type: nauc_ndcg_at_1000_std
value: 11.6462
- type: nauc_ndcg_at_1000_diff1
value: 63.166900000000005
- type: nauc_map_at_1_max
value: 53.0712
- type: nauc_map_at_1_std
value: 3.4898
- type: nauc_map_at_1_diff1
value: 66.2941
- type: nauc_map_at_3_max
value: 58.0839
- type: nauc_map_at_3_std
value: 9.8015
- type: nauc_map_at_3_diff1
value: 63.7764
- type: nauc_map_at_5_max
value: 57.7643
- type: nauc_map_at_5_std
value: 9.661200000000001
- type: nauc_map_at_5_diff1
value: 62.8703
- type: nauc_map_at_10_max
value: 57.92230000000001
- type: nauc_map_at_10_std
value: 10.7513
- type: nauc_map_at_10_diff1
value: 63.282700000000006
- type: nauc_map_at_20_max
value: 57.898
- type: nauc_map_at_20_std
value: 10.3559
- type: nauc_map_at_20_diff1
value: 63.4981
- type: nauc_map_at_100_max
value: 57.8164
- type: nauc_map_at_100_std
value: 10.2083
- type: nauc_map_at_100_diff1
value: 63.524
- type: nauc_map_at_1000_max
value: 57.80610000000001
- type: nauc_map_at_1000_std
value: 10.1882
- type: nauc_map_at_1000_diff1
value: 63.521499999999996
- type: nauc_recall_at_1_max
value: 53.0712
- type: nauc_recall_at_1_std
value: 3.4898
- type: nauc_recall_at_1_diff1
value: 66.2941
- type: nauc_recall_at_3_max
value: 65.6965
- type: nauc_recall_at_3_std
value: 20.741100000000003
- type: nauc_recall_at_3_diff1
value: 59.885600000000004
- type: nauc_recall_at_5_max
value: 65.05539999999999
- type: nauc_recall_at_5_std
value: 22.2359
- type: nauc_recall_at_5_diff1
value: 52.3555
- type: nauc_recall_at_10_max
value: 69.0771
- type: nauc_recall_at_10_std
value: 43.1849
- type: nauc_recall_at_10_diff1
value: 55.924099999999996
- type: nauc_recall_at_20_max
value: 73.63589999999999
- type: nauc_recall_at_20_std
value: 40.5013
- type: nauc_recall_at_20_diff1
value: 62.9617
- type: nauc_recall_at_100_max
value: 93.44839999999999
- type: nauc_recall_at_100_std
value: 79.5537
- type: nauc_recall_at_100_diff1
value: 72.2107
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 53.0712
- type: nauc_precision_at_1_std
value: 3.4898
- type: nauc_precision_at_1_diff1
value: 66.2941
- type: nauc_precision_at_3_max
value: 65.6965
- type: nauc_precision_at_3_std
value: 20.741100000000003
- type: nauc_precision_at_3_diff1
value: 59.885600000000004
- type: nauc_precision_at_5_max
value: 65.05539999999999
- type: nauc_precision_at_5_std
value: 22.2359
- type: nauc_precision_at_5_diff1
value: 52.3555
- type: nauc_precision_at_10_max
value: 69.0771
- type: nauc_precision_at_10_std
value: 43.1849
- type: nauc_precision_at_10_diff1
value: 55.924099999999996
- type: nauc_precision_at_20_max
value: 73.63589999999999
- type: nauc_precision_at_20_std
value: 40.5013
- type: nauc_precision_at_20_diff1
value: 62.9617
- type: nauc_precision_at_100_max
value: 93.44839999999999
- type: nauc_precision_at_100_std
value: 79.5537
- type: nauc_precision_at_100_diff1
value: 72.2107
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 53.0712
- type: nauc_mrr_at_1_std
value: 3.4898
- type: nauc_mrr_at_1_diff1
value: 66.2941
- type: nauc_mrr_at_3_max
value: 58.0839
- type: nauc_mrr_at_3_std
value: 9.8015
- type: nauc_mrr_at_3_diff1
value: 63.7764
- type: nauc_mrr_at_5_max
value: 57.7643
- type: nauc_mrr_at_5_std
value: 9.661200000000001
- type: nauc_mrr_at_5_diff1
value: 62.8703
- type: nauc_mrr_at_10_max
value: 57.92230000000001
- type: nauc_mrr_at_10_std
value: 10.7513
- type: nauc_mrr_at_10_diff1
value: 63.282700000000006
- type: nauc_mrr_at_20_max
value: 57.898
- type: nauc_mrr_at_20_std
value: 10.3559
- type: nauc_mrr_at_20_diff1
value: 63.4981
- type: nauc_mrr_at_100_max
value: 57.8164
- type: nauc_mrr_at_100_std
value: 10.2083
- type: nauc_mrr_at_100_diff1
value: 63.524
- type: nauc_mrr_at_1000_max
value: 57.80610000000001
- type: nauc_mrr_at_1000_std
value: 10.1882
- type: nauc_mrr_at_1000_diff1
value: 63.521499999999996
- type: main_score
value: 70.511
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 52.147
- type: ndcg_at_3
value: 60.407
- type: ndcg_at_5
value: 64.209
- type: ndcg_at_10
value: 66.841
- type: ndcg_at_20
value: 68.27
- type: ndcg_at_100
value: 70.407
- type: ndcg_at_1000
value: 70.407
- type: map_at_1
value: 52.147
- type: map_at_3
value: 58.384
- type: map_at_5
value: 60.501000000000005
- type: map_at_10
value: 61.617
- type: map_at_20
value: 62.026
- type: map_at_100
value: 62.356
- type: map_at_1000
value: 62.356
- type: recall_at_1
value: 52.147
- type: recall_at_3
value: 66.258
- type: recall_at_5
value: 75.46000000000001
- type: recall_at_10
value: 83.43599999999999
- type: recall_at_20
value: 88.957
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.147
- type: precision_at_3
value: 22.086
- type: precision_at_5
value: 15.092
- type: precision_at_10
value: 8.344
- type: precision_at_20
value: 4.4479999999999995
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.147200000000005
- type: mrr_at_3
value: 58.384499999999996
- type: mrr_at_5
value: 60.501000000000005
- type: mrr_at_10
value: 61.616499999999995
- type: mrr_at_20
value: 62.02609999999999
- type: mrr_at_100
value: 62.3563
- type: mrr_at_1000
value: 62.3563
- type: nauc_ndcg_at_1_max
value: 62.013
- type: nauc_ndcg_at_1_std
value: 14.3347
- type: nauc_ndcg_at_1_diff1
value: 63.092000000000006
- type: nauc_ndcg_at_3_max
value: 64.3437
- type: nauc_ndcg_at_3_std
value: 17.8683
- type: nauc_ndcg_at_3_diff1
value: 58.916999999999994
- type: nauc_ndcg_at_5_max
value: 62.3664
- type: nauc_ndcg_at_5_std
value: 17.697
- type: nauc_ndcg_at_5_diff1
value: 57.1928
- type: nauc_ndcg_at_10_max
value: 62.8166
- type: nauc_ndcg_at_10_std
value: 19.034599999999998
- type: nauc_ndcg_at_10_diff1
value: 58.5172
- type: nauc_ndcg_at_20_max
value: 63.6594
- type: nauc_ndcg_at_20_std
value: 20.9389
- type: nauc_ndcg_at_20_diff1
value: 57.687900000000006
- type: nauc_ndcg_at_100_max
value: 63.109700000000004
- type: nauc_ndcg_at_100_std
value: 18.536
- type: nauc_ndcg_at_100_diff1
value: 58.574099999999994
- type: nauc_ndcg_at_1000_max
value: 63.109700000000004
- type: nauc_ndcg_at_1000_std
value: 18.536
- type: nauc_ndcg_at_1000_diff1
value: 58.574099999999994
- type: nauc_map_at_1_max
value: 62.013
- type: nauc_map_at_1_std
value: 14.3347
- type: nauc_map_at_1_diff1
value: 63.092000000000006
- type: nauc_map_at_3_max
value: 63.7613
- type: nauc_map_at_3_std
value: 17.387800000000002
- type: nauc_map_at_3_diff1
value: 59.5963
- type: nauc_map_at_5_max
value: 62.6696
- type: nauc_map_at_5_std
value: 17.2029
- type: nauc_map_at_5_diff1
value: 58.5964
- type: nauc_map_at_10_max
value: 62.7803
- type: nauc_map_at_10_std
value: 17.6424
- type: nauc_map_at_10_diff1
value: 59.108799999999995
- type: nauc_map_at_20_max
value: 63.032
- type: nauc_map_at_20_std
value: 18.2008
- type: nauc_map_at_20_diff1
value: 58.8951
- type: nauc_map_at_100_max
value: 62.961800000000004
- type: nauc_map_at_100_std
value: 17.8419
- type: nauc_map_at_100_diff1
value: 59.0283
- type: nauc_map_at_1000_max
value: 62.961800000000004
- type: nauc_map_at_1000_std
value: 17.8419
- type: nauc_map_at_1000_diff1
value: 59.0283
- type: nauc_recall_at_1_max
value: 62.013
- type: nauc_recall_at_1_std
value: 14.3347
- type: nauc_recall_at_1_diff1
value: 63.092000000000006
- type: nauc_recall_at_3_max
value: 66.2268
- type: nauc_recall_at_3_std
value: 19.2254
- type: nauc_recall_at_3_diff1
value: 56.8986
- type: nauc_recall_at_5_max
value: 60.8216
- type: nauc_recall_at_5_std
value: 19.4877
- type: nauc_recall_at_5_diff1
value: 51.761900000000004
- type: nauc_recall_at_10_max
value: 63.136199999999995
- type: nauc_recall_at_10_std
value: 27.4165
- type: nauc_recall_at_10_diff1
value: 56.558
- type: nauc_recall_at_20_max
value: 69.8169
- type: nauc_recall_at_20_std
value: 45.7693
- type: nauc_recall_at_20_diff1
value: 48.7296
- type: nauc_recall_at_100_max
value: .nan
- type: nauc_recall_at_100_std
value: .nan
- type: nauc_recall_at_100_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 62.013
- type: nauc_precision_at_1_std
value: 14.3347
- type: nauc_precision_at_1_diff1
value: 63.092000000000006
- type: nauc_precision_at_3_max
value: 66.2268
- type: nauc_precision_at_3_std
value: 19.2254
- type: nauc_precision_at_3_diff1
value: 56.8986
- type: nauc_precision_at_5_max
value: 60.8216
- type: nauc_precision_at_5_std
value: 19.4877
- type: nauc_precision_at_5_diff1
value: 51.761900000000004
- type: nauc_precision_at_10_max
value: 63.136199999999995
- type: nauc_precision_at_10_std
value: 27.4165
- type: nauc_precision_at_10_diff1
value: 56.558
- type: nauc_precision_at_20_max
value: 69.8169
- type: nauc_precision_at_20_std
value: 45.7693
- type: nauc_precision_at_20_diff1
value: 48.7296
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 100.0
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 62.013
- type: nauc_mrr_at_1_std
value: 14.3347
- type: nauc_mrr_at_1_diff1
value: 63.092000000000006
- type: nauc_mrr_at_3_max
value: 63.7613
- type: nauc_mrr_at_3_std
value: 17.387800000000002
- type: nauc_mrr_at_3_diff1
value: 59.5963
- type: nauc_mrr_at_5_max
value: 62.6696
- type: nauc_mrr_at_5_std
value: 17.2029
- type: nauc_mrr_at_5_diff1
value: 58.5964
- type: nauc_mrr_at_10_max
value: 62.7803
- type: nauc_mrr_at_10_std
value: 17.6424
- type: nauc_mrr_at_10_diff1
value: 59.108799999999995
- type: nauc_mrr_at_20_max
value: 63.032
- type: nauc_mrr_at_20_std
value: 18.2008
- type: nauc_mrr_at_20_diff1
value: 58.8951
- type: nauc_mrr_at_100_max
value: 62.961800000000004
- type: nauc_mrr_at_100_std
value: 17.8419
- type: nauc_mrr_at_100_diff1
value: 59.0283
- type: nauc_mrr_at_1000_max
value: 62.961800000000004
- type: nauc_mrr_at_1000_std
value: 17.8419
- type: nauc_mrr_at_1000_diff1
value: 59.0283
- type: main_score
value: 66.841
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 58.511
- type: ndcg_at_3
value: 68.022
- type: ndcg_at_5
value: 69.808
- type: ndcg_at_10
value: 71.552
- type: ndcg_at_20
value: 73.287
- type: ndcg_at_100
value: 74.737
- type: ndcg_at_1000
value: 74.964
- type: map_at_1
value: 58.511
- type: map_at_3
value: 65.78
- type: map_at_5
value: 66.791
- type: map_at_10
value: 67.523
- type: map_at_20
value: 67.994
- type: map_at_100
value: 68.219
- type: map_at_1000
value: 68.231
- type: recall_at_1
value: 58.511
- type: recall_at_3
value: 74.468
- type: recall_at_5
value: 78.723
- type: recall_at_10
value: 84.043
- type: recall_at_20
value: 90.957
- type: recall_at_100
value: 98.404
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 58.511
- type: precision_at_3
value: 24.823
- type: precision_at_5
value: 15.745000000000001
- type: precision_at_10
value: 8.404
- type: precision_at_20
value: 4.548
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 58.510600000000004
- type: mrr_at_3
value: 65.78009999999999
- type: mrr_at_5
value: 66.79079999999999
- type: mrr_at_10
value: 67.5232
- type: mrr_at_20
value: 67.994
- type: mrr_at_100
value: 68.2188
- type: mrr_at_1000
value: 68.2311
- type: nauc_ndcg_at_1_max
value: 47.2503
- type: nauc_ndcg_at_1_std
value: 14.4989
- type: nauc_ndcg_at_1_diff1
value: 63.2463
- type: nauc_ndcg_at_3_max
value: 54.855900000000005
- type: nauc_ndcg_at_3_std
value: 21.204700000000003
- type: nauc_ndcg_at_3_diff1
value: 60.0863
- type: nauc_ndcg_at_5_max
value: 55.416399999999996
- type: nauc_ndcg_at_5_std
value: 22.047900000000002
- type: nauc_ndcg_at_5_diff1
value: 61.1254
- type: nauc_ndcg_at_10_max
value: 53.0238
- type: nauc_ndcg_at_10_std
value: 19.6632
- type: nauc_ndcg_at_10_diff1
value: 60.5071
- type: nauc_ndcg_at_20_max
value: 53.337599999999995
- type: nauc_ndcg_at_20_std
value: 21.4431
- type: nauc_ndcg_at_20_diff1
value: 59.5753
- type: nauc_ndcg_at_100_max
value: 52.819300000000005
- type: nauc_ndcg_at_100_std
value: 20.0427
- type: nauc_ndcg_at_100_diff1
value: 60.933800000000005
- type: nauc_ndcg_at_1000_max
value: 52.70399999999999
- type: nauc_ndcg_at_1000_std
value: 19.5895
- type: nauc_ndcg_at_1000_diff1
value: 60.8733
- type: nauc_map_at_1_max
value: 47.2503
- type: nauc_map_at_1_std
value: 14.4989
- type: nauc_map_at_1_diff1
value: 63.2463
- type: nauc_map_at_3_max
value: 52.973400000000005
- type: nauc_map_at_3_std
value: 19.3872
- type: nauc_map_at_3_diff1
value: 60.8399
- type: nauc_map_at_5_max
value: 53.166999999999994
- type: nauc_map_at_5_std
value: 19.7018
- type: nauc_map_at_5_diff1
value: 61.3792
- type: nauc_map_at_10_max
value: 52.2108
- type: nauc_map_at_10_std
value: 18.693199999999997
- type: nauc_map_at_10_diff1
value: 61.15390000000001
- type: nauc_map_at_20_max
value: 52.2363
- type: nauc_map_at_20_std
value: 19.135099999999998
- type: nauc_map_at_20_diff1
value: 60.963
- type: nauc_map_at_100_max
value: 52.16499999999999
- type: nauc_map_at_100_std
value: 18.8758
- type: nauc_map_at_100_diff1
value: 61.1737
- type: nauc_map_at_1000_max
value: 52.1605
- type: nauc_map_at_1000_std
value: 18.8562
- type: nauc_map_at_1000_diff1
value: 61.1715
- type: nauc_recall_at_1_max
value: 47.2503
- type: nauc_recall_at_1_std
value: 14.4989
- type: nauc_recall_at_1_diff1
value: 63.2463
- type: nauc_recall_at_3_max
value: 61.4028
- type: nauc_recall_at_3_std
value: 27.6147
- type: nauc_recall_at_3_diff1
value: 57.4815
- type: nauc_recall_at_5_max
value: 64.4332
- type: nauc_recall_at_5_std
value: 31.658399999999997
- type: nauc_recall_at_5_diff1
value: 60.4164
- type: nauc_recall_at_10_max
value: 55.680099999999996
- type: nauc_recall_at_10_std
value: 23.6144
- type: nauc_recall_at_10_diff1
value: 57.232099999999996
- type: nauc_recall_at_20_max
value: 61.303700000000006
- type: nauc_recall_at_20_std
value: 42.750899999999994
- type: nauc_recall_at_20_diff1
value: 45.5658
- type: nauc_recall_at_100_max
value: 63.750099999999996
- type: nauc_recall_at_100_std
value: 61.4922
- type: nauc_recall_at_100_diff1
value: 66.5823
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 47.2503
- type: nauc_precision_at_1_std
value: 14.4989
- type: nauc_precision_at_1_diff1
value: 63.2463
- type: nauc_precision_at_3_max
value: 61.4028
- type: nauc_precision_at_3_std
value: 27.6147
- type: nauc_precision_at_3_diff1
value: 57.4815
- type: nauc_precision_at_5_max
value: 64.4332
- type: nauc_precision_at_5_std
value: 31.658399999999997
- type: nauc_precision_at_5_diff1
value: 60.4164
- type: nauc_precision_at_10_max
value: 55.680099999999996
- type: nauc_precision_at_10_std
value: 23.6144
- type: nauc_precision_at_10_diff1
value: 57.232099999999996
- type: nauc_precision_at_20_max
value: 61.303700000000006
- type: nauc_precision_at_20_std
value: 42.750899999999994
- type: nauc_precision_at_20_diff1
value: 45.5658
- type: nauc_precision_at_100_max
value: 63.750099999999996
- type: nauc_precision_at_100_std
value: 61.4922
- type: nauc_precision_at_100_diff1
value: 66.5823
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 47.2503
- type: nauc_mrr_at_1_std
value: 14.4989
- type: nauc_mrr_at_1_diff1
value: 63.2463
- type: nauc_mrr_at_3_max
value: 52.973400000000005
- type: nauc_mrr_at_3_std
value: 19.3872
- type: nauc_mrr_at_3_diff1
value: 60.8399
- type: nauc_mrr_at_5_max
value: 53.166999999999994
- type: nauc_mrr_at_5_std
value: 19.7018
- type: nauc_mrr_at_5_diff1
value: 61.3792
- type: nauc_mrr_at_10_max
value: 52.2108
- type: nauc_mrr_at_10_std
value: 18.693199999999997
- type: nauc_mrr_at_10_diff1
value: 61.15390000000001
- type: nauc_mrr_at_20_max
value: 52.2363
- type: nauc_mrr_at_20_std
value: 19.135099999999998
- type: nauc_mrr_at_20_diff1
value: 60.963
- type: nauc_mrr_at_100_max
value: 52.16499999999999
- type: nauc_mrr_at_100_std
value: 18.8758
- type: nauc_mrr_at_100_diff1
value: 61.1737
- type: nauc_mrr_at_1000_max
value: 52.1605
- type: nauc_mrr_at_1000_std
value: 18.8562
- type: nauc_mrr_at_1000_diff1
value: 61.1715
- type: main_score
value: 71.552
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 48.792
- type: ndcg_at_3
value: 58.879000000000005
- type: ndcg_at_5
value: 62.039
- type: ndcg_at_10
value: 64.575
- type: ndcg_at_20
value: 66.373
- type: ndcg_at_100
value: 68.355
- type: ndcg_at_1000
value: 68.423
- type: map_at_1
value: 48.792
- type: map_at_3
value: 56.361000000000004
- type: map_at_5
value: 58.099999999999994
- type: map_at_10
value: 59.168
- type: map_at_20
value: 59.643
- type: map_at_100
value: 59.924
- type: map_at_1000
value: 59.927
- type: recall_at_1
value: 48.792
- type: recall_at_3
value: 66.184
- type: recall_at_5
value: 73.913
- type: recall_at_10
value: 81.643
- type: recall_at_20
value: 88.889
- type: recall_at_100
value: 99.517
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 48.792
- type: precision_at_3
value: 22.061
- type: precision_at_5
value: 14.783
- type: precision_at_10
value: 8.164
- type: precision_at_20
value: 4.444
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 48.7923
- type: mrr_at_3
value: 56.360699999999994
- type: mrr_at_5
value: 58.0998
- type: mrr_at_10
value: 59.1684
- type: mrr_at_20
value: 59.6429
- type: mrr_at_100
value: 59.923899999999996
- type: mrr_at_1000
value: 59.927299999999995
- type: nauc_ndcg_at_1_max
value: 60.14659999999999
- type: nauc_ndcg_at_1_std
value: 24.918000000000003
- type: nauc_ndcg_at_1_diff1
value: 68.1555
- type: nauc_ndcg_at_3_max
value: 68.1987
- type: nauc_ndcg_at_3_std
value: 33.2158
- type: nauc_ndcg_at_3_diff1
value: 65.9628
- type: nauc_ndcg_at_5_max
value: 67.9623
- type: nauc_ndcg_at_5_std
value: 35.7052
- type: nauc_ndcg_at_5_diff1
value: 65.3555
- type: nauc_ndcg_at_10_max
value: 67.2588
- type: nauc_ndcg_at_10_std
value: 35.5972
- type: nauc_ndcg_at_10_diff1
value: 64.43560000000001
- type: nauc_ndcg_at_20_max
value: 66.4426
- type: nauc_ndcg_at_20_std
value: 34.2402
- type: nauc_ndcg_at_20_diff1
value: 64.6256
- type: nauc_ndcg_at_100_max
value: 65.9374
- type: nauc_ndcg_at_100_std
value: 33.2936
- type: nauc_ndcg_at_100_diff1
value: 65.4946
- type: nauc_ndcg_at_1000_max
value: 65.8403
- type: nauc_ndcg_at_1000_std
value: 33.1036
- type: nauc_ndcg_at_1000_diff1
value: 65.4336
- type: nauc_map_at_1_max
value: 60.14659999999999
- type: nauc_map_at_1_std
value: 24.918000000000003
- type: nauc_map_at_1_diff1
value: 68.1555
- type: nauc_map_at_3_max
value: 65.9154
- type: nauc_map_at_3_std
value: 31.2376
- type: nauc_map_at_3_diff1
value: 66.2823
- type: nauc_map_at_5_max
value: 65.6741
- type: nauc_map_at_5_std
value: 32.3493
- type: nauc_map_at_5_diff1
value: 65.985
- type: nauc_map_at_10_max
value: 65.32430000000001
- type: nauc_map_at_10_std
value: 32.1969
- type: nauc_map_at_10_diff1
value: 65.6151
- type: nauc_map_at_20_max
value: 65.11710000000001
- type: nauc_map_at_20_std
value: 31.842599999999997
- type: nauc_map_at_20_diff1
value: 65.6874
- type: nauc_map_at_100_max
value: 65.0633
- type: nauc_map_at_100_std
value: 31.7911
- type: nauc_map_at_100_diff1
value: 65.803
- type: nauc_map_at_1000_max
value: 65.0593
- type: nauc_map_at_1000_std
value: 31.7832
- type: nauc_map_at_1000_diff1
value: 65.8006
- type: nauc_recall_at_1_max
value: 60.14659999999999
- type: nauc_recall_at_1_std
value: 24.918000000000003
- type: nauc_recall_at_1_diff1
value: 68.1555
- type: nauc_recall_at_3_max
value: 75.8576
- type: nauc_recall_at_3_std
value: 39.685900000000004
- type: nauc_recall_at_3_diff1
value: 65.02459999999999
- type: nauc_recall_at_5_max
value: 76.9843
- type: nauc_recall_at_5_std
value: 49.3317
- type: nauc_recall_at_5_diff1
value: 62.922599999999996
- type: nauc_recall_at_10_max
value: 76.8501
- type: nauc_recall_at_10_std
value: 53.6033
- type: nauc_recall_at_10_diff1
value: 58.028999999999996
- type: nauc_recall_at_20_max
value: 74.5552
- type: nauc_recall_at_20_std
value: 51.1048
- type: nauc_recall_at_20_diff1
value: 55.864000000000004
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 86.907
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 60.14659999999999
- type: nauc_precision_at_1_std
value: 24.918000000000003
- type: nauc_precision_at_1_diff1
value: 68.1555
- type: nauc_precision_at_3_max
value: 75.8576
- type: nauc_precision_at_3_std
value: 39.685900000000004
- type: nauc_precision_at_3_diff1
value: 65.02459999999999
- type: nauc_precision_at_5_max
value: 76.9843
- type: nauc_precision_at_5_std
value: 49.3317
- type: nauc_precision_at_5_diff1
value: 62.922599999999996
- type: nauc_precision_at_10_max
value: 76.8501
- type: nauc_precision_at_10_std
value: 53.6033
- type: nauc_precision_at_10_diff1
value: 58.028999999999996
- type: nauc_precision_at_20_max
value: 74.5552
- type: nauc_precision_at_20_std
value: 51.1048
- type: nauc_precision_at_20_diff1
value: 55.864000000000004
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 86.907
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 60.14659999999999
- type: nauc_mrr_at_1_std
value: 24.918000000000003
- type: nauc_mrr_at_1_diff1
value: 68.1555
- type: nauc_mrr_at_3_max
value: 65.9154
- type: nauc_mrr_at_3_std
value: 31.2376
- type: nauc_mrr_at_3_diff1
value: 66.2823
- type: nauc_mrr_at_5_max
value: 65.6741
- type: nauc_mrr_at_5_std
value: 32.3493
- type: nauc_mrr_at_5_diff1
value: 65.985
- type: nauc_mrr_at_10_max
value: 65.32430000000001
- type: nauc_mrr_at_10_std
value: 32.1969
- type: nauc_mrr_at_10_diff1
value: 65.6151
- type: nauc_mrr_at_20_max
value: 65.11710000000001
- type: nauc_mrr_at_20_std
value: 31.842599999999997
- type: nauc_mrr_at_20_diff1
value: 65.6874
- type: nauc_mrr_at_100_max
value: 65.0633
- type: nauc_mrr_at_100_std
value: 31.7911
- type: nauc_mrr_at_100_diff1
value: 65.803
- type: nauc_mrr_at_1000_max
value: 65.0593
- type: nauc_mrr_at_1000_std
value: 31.7832
- type: nauc_mrr_at_1000_diff1
value: 65.8006
- type: main_score
value: 64.575
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 44.101
- type: ndcg_at_3
value: 53.613
- type: ndcg_at_5
value: 57.083
- type: ndcg_at_10
value: 59.467000000000006
- type: ndcg_at_20
value: 61.085
- type: ndcg_at_100
value: 62.991
- type: ndcg_at_1000
value: 63.837999999999994
- type: map_at_1
value: 44.101
- type: map_at_3
value: 51.225
- type: map_at_5
value: 53.13
- type: map_at_10
value: 54.081
- type: map_at_20
value: 54.529
- type: map_at_100
value: 54.771
- type: map_at_1000
value: 54.806999999999995
- type: recall_at_1
value: 44.101
- type: recall_at_3
value: 60.541999999999994
- type: recall_at_5
value: 69.052
- type: recall_at_10
value: 76.596
- type: recall_at_20
value: 82.979
- type: recall_at_100
value: 93.61699999999999
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 44.101
- type: precision_at_3
value: 20.180999999999997
- type: precision_at_5
value: 13.81
- type: precision_at_10
value: 7.66
- type: precision_at_20
value: 4.149
- type: precision_at_100
value: 0.936
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 44.1006
- type: mrr_at_3
value: 51.225
- type: mrr_at_5
value: 53.1302
- type: mrr_at_10
value: 54.0814
- type: mrr_at_20
value: 54.5288
- type: mrr_at_100
value: 54.770799999999994
- type: mrr_at_1000
value: 54.8066
- type: nauc_ndcg_at_1_max
value: 55.80310000000001
- type: nauc_ndcg_at_1_std
value: 22.0275
- type: nauc_ndcg_at_1_diff1
value: 56.5222
- type: nauc_ndcg_at_3_max
value: 54.8699
- type: nauc_ndcg_at_3_std
value: 25.883699999999997
- type: nauc_ndcg_at_3_diff1
value: 49.195699999999995
- type: nauc_ndcg_at_5_max
value: 56.272299999999994
- type: nauc_ndcg_at_5_std
value: 28.6933
- type: nauc_ndcg_at_5_diff1
value: 49.4566
- type: nauc_ndcg_at_10_max
value: 55.6011
- type: nauc_ndcg_at_10_std
value: 27.5248
- type: nauc_ndcg_at_10_diff1
value: 48.7372
- type: nauc_ndcg_at_20_max
value: 55.49230000000001
- type: nauc_ndcg_at_20_std
value: 26.862599999999997
- type: nauc_ndcg_at_20_diff1
value: 49.382799999999996
- type: nauc_ndcg_at_100_max
value: 55.7909
- type: nauc_ndcg_at_100_std
value: 27.314100000000003
- type: nauc_ndcg_at_100_diff1
value: 50.6826
- type: nauc_ndcg_at_1000_max
value: 55.614200000000004
- type: nauc_ndcg_at_1000_std
value: 26.6721
- type: nauc_ndcg_at_1000_diff1
value: 50.67660000000001
- type: nauc_map_at_1_max
value: 55.80310000000001
- type: nauc_map_at_1_std
value: 22.0275
- type: nauc_map_at_1_diff1
value: 56.5222
- type: nauc_map_at_3_max
value: 54.9107
- type: nauc_map_at_3_std
value: 24.803
- type: nauc_map_at_3_diff1
value: 51.0794
- type: nauc_map_at_5_max
value: 55.702600000000004
- type: nauc_map_at_5_std
value: 26.3248
- type: nauc_map_at_5_diff1
value: 51.3243
- type: nauc_map_at_10_max
value: 55.4072
- type: nauc_map_at_10_std
value: 25.8517
- type: nauc_map_at_10_diff1
value: 51.073100000000004
- type: nauc_map_at_20_max
value: 55.4075
- type: nauc_map_at_20_std
value: 25.684600000000003
- type: nauc_map_at_20_diff1
value: 51.2544
- type: nauc_map_at_100_max
value: 55.4738
- type: nauc_map_at_100_std
value: 25.7963
- type: nauc_map_at_100_diff1
value: 51.4555
- type: nauc_map_at_1000_max
value: 55.4642
- type: nauc_map_at_1000_std
value: 25.7658
- type: nauc_map_at_1000_diff1
value: 51.4559
- type: nauc_recall_at_1_max
value: 55.80310000000001
- type: nauc_recall_at_1_std
value: 22.0275
- type: nauc_recall_at_1_diff1
value: 56.5222
- type: nauc_recall_at_3_max
value: 54.8305
- type: nauc_recall_at_3_std
value: 29.317999999999998
- type: nauc_recall_at_3_diff1
value: 43.279
- type: nauc_recall_at_5_max
value: 58.5943
- type: nauc_recall_at_5_std
value: 37.6264
- type: nauc_recall_at_5_diff1
value: 42.7338
- type: nauc_recall_at_10_max
value: 56.5176
- type: nauc_recall_at_10_std
value: 34.6487
- type: nauc_recall_at_10_diff1
value: 38.0783
- type: nauc_recall_at_20_max
value: 55.6135
- type: nauc_recall_at_20_std
value: 32.082100000000004
- type: nauc_recall_at_20_diff1
value: 39.259100000000004
- type: nauc_recall_at_100_max
value: 60.3625
- type: nauc_recall_at_100_std
value: 45.4796
- type: nauc_recall_at_100_diff1
value: 50.6829
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.80310000000001
- type: nauc_precision_at_1_std
value: 22.0275
- type: nauc_precision_at_1_diff1
value: 56.5222
- type: nauc_precision_at_3_max
value: 54.8305
- type: nauc_precision_at_3_std
value: 29.317999999999998
- type: nauc_precision_at_3_diff1
value: 43.279
- type: nauc_precision_at_5_max
value: 58.5943
- type: nauc_precision_at_5_std
value: 37.6264
- type: nauc_precision_at_5_diff1
value: 42.7338
- type: nauc_precision_at_10_max
value: 56.5176
- type: nauc_precision_at_10_std
value: 34.6487
- type: nauc_precision_at_10_diff1
value: 38.0783
- type: nauc_precision_at_20_max
value: 55.6135
- type: nauc_precision_at_20_std
value: 32.082100000000004
- type: nauc_precision_at_20_diff1
value: 39.259100000000004
- type: nauc_precision_at_100_max
value: 60.3625
- type: nauc_precision_at_100_std
value: 45.4796
- type: nauc_precision_at_100_diff1
value: 50.6829
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 55.80310000000001
- type: nauc_mrr_at_1_std
value: 22.0275
- type: nauc_mrr_at_1_diff1
value: 56.5222
- type: nauc_mrr_at_3_max
value: 54.9107
- type: nauc_mrr_at_3_std
value: 24.803
- type: nauc_mrr_at_3_diff1
value: 51.0794
- type: nauc_mrr_at_5_max
value: 55.702600000000004
- type: nauc_mrr_at_5_std
value: 26.3248
- type: nauc_mrr_at_5_diff1
value: 51.3243
- type: nauc_mrr_at_10_max
value: 55.4072
- type: nauc_mrr_at_10_std
value: 25.8517
- type: nauc_mrr_at_10_diff1
value: 51.073100000000004
- type: nauc_mrr_at_20_max
value: 55.4075
- type: nauc_mrr_at_20_std
value: 25.684600000000003
- type: nauc_mrr_at_20_diff1
value: 51.2544
- type: nauc_mrr_at_100_max
value: 55.4738
- type: nauc_mrr_at_100_std
value: 25.7963
- type: nauc_mrr_at_100_diff1
value: 51.4555
- type: nauc_mrr_at_1000_max
value: 55.4642
- type: nauc_mrr_at_1000_std
value: 25.7658
- type: nauc_mrr_at_1000_diff1
value: 51.4559
- type: main_score
value: 59.467000000000006
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 52.795
- type: ndcg_at_3
value: 64.507
- type: ndcg_at_5
value: 67.581
- type: ndcg_at_10
value: 70.32300000000001
- type: ndcg_at_20
value: 70.475
- type: ndcg_at_100
value: 72.195
- type: ndcg_at_1000
value: 72.286
- type: map_at_1
value: 52.795
- type: map_at_3
value: 61.49099999999999
- type: map_at_5
value: 63.199000000000005
- type: map_at_10
value: 64.29
- type: map_at_20
value: 64.328
- type: map_at_100
value: 64.564
- type: map_at_1000
value: 64.57000000000001
- type: recall_at_1
value: 52.795
- type: recall_at_3
value: 73.292
- type: recall_at_5
value: 80.745
- type: recall_at_10
value: 89.441
- type: recall_at_20
value: 90.062
- type: recall_at_100
value: 99.37899999999999
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.795
- type: precision_at_3
value: 24.431
- type: precision_at_5
value: 16.149
- type: precision_at_10
value: 8.944
- type: precision_at_20
value: 4.503
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.795
- type: mrr_at_3
value: 61.4907
- type: mrr_at_5
value: 63.1988
- type: mrr_at_10
value: 64.28970000000001
- type: mrr_at_20
value: 64.3285
- type: mrr_at_100
value: 64.5641
- type: mrr_at_1000
value: 64.5697
- type: nauc_ndcg_at_1_max
value: 53.888999999999996
- type: nauc_ndcg_at_1_std
value: 11.0525
- type: nauc_ndcg_at_1_diff1
value: 74.8286
- type: nauc_ndcg_at_3_max
value: 59.9321
- type: nauc_ndcg_at_3_std
value: 21.096899999999998
- type: nauc_ndcg_at_3_diff1
value: 69.4211
- type: nauc_ndcg_at_5_max
value: 61.1135
- type: nauc_ndcg_at_5_std
value: 21.885199999999998
- type: nauc_ndcg_at_5_diff1
value: 69.2178
- type: nauc_ndcg_at_10_max
value: 61.0899
- type: nauc_ndcg_at_10_std
value: 23.1179
- type: nauc_ndcg_at_10_diff1
value: 69.1936
- type: nauc_ndcg_at_20_max
value: 60.7846
- type: nauc_ndcg_at_20_std
value: 22.5977
- type: nauc_ndcg_at_20_diff1
value: 69.1149
- type: nauc_ndcg_at_100_max
value: 59.8011
- type: nauc_ndcg_at_100_std
value: 20.5927
- type: nauc_ndcg_at_100_diff1
value: 70.11319999999999
- type: nauc_ndcg_at_1000_max
value: 59.630799999999994
- type: nauc_ndcg_at_1000_std
value: 20.2562
- type: nauc_ndcg_at_1000_diff1
value: 70.357
- type: nauc_map_at_1_max
value: 53.888999999999996
- type: nauc_map_at_1_std
value: 11.0525
- type: nauc_map_at_1_diff1
value: 74.8286
- type: nauc_map_at_3_max
value: 58.2855
- type: nauc_map_at_3_std
value: 18.0442
- type: nauc_map_at_3_diff1
value: 70.7787
- type: nauc_map_at_5_max
value: 58.875299999999996
- type: nauc_map_at_5_std
value: 18.276999999999997
- type: nauc_map_at_5_diff1
value: 70.7961
- type: nauc_map_at_10_max
value: 58.7896
- type: nauc_map_at_10_std
value: 18.697
- type: nauc_map_at_10_diff1
value: 70.759
- type: nauc_map_at_20_max
value: 58.7205
- type: nauc_map_at_20_std
value: 18.5786
- type: nauc_map_at_20_diff1
value: 70.74380000000001
- type: nauc_map_at_100_max
value: 58.64319999999999
- type: nauc_map_at_100_std
value: 18.418799999999997
- type: nauc_map_at_100_diff1
value: 70.9314
- type: nauc_map_at_1000_max
value: 58.634699999999995
- type: nauc_map_at_1000_std
value: 18.401999999999997
- type: nauc_map_at_1000_diff1
value: 70.9434
- type: nauc_recall_at_1_max
value: 53.888999999999996
- type: nauc_recall_at_1_std
value: 11.0525
- type: nauc_recall_at_1_diff1
value: 74.8286
- type: nauc_recall_at_3_max
value: 65.92
- type: nauc_recall_at_3_std
value: 32.3637
- type: nauc_recall_at_3_diff1
value: 64.5457
- type: nauc_recall_at_5_max
value: 71.4171
- type: nauc_recall_at_5_std
value: 38.7281
- type: nauc_recall_at_5_diff1
value: 61.96430000000001
- type: nauc_recall_at_10_max
value: 78.67739999999999
- type: nauc_recall_at_10_std
value: 57.8693
- type: nauc_recall_at_10_diff1
value: 57.7189
- type: nauc_recall_at_20_max
value: 76.7024
- type: nauc_recall_at_20_std
value: 54.76370000000001
- type: nauc_recall_at_20_diff1
value: 56.3392
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 12.5808
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 53.888999999999996
- type: nauc_precision_at_1_std
value: 11.0525
- type: nauc_precision_at_1_diff1
value: 74.8286
- type: nauc_precision_at_3_max
value: 65.92
- type: nauc_precision_at_3_std
value: 32.3637
- type: nauc_precision_at_3_diff1
value: 64.5457
- type: nauc_precision_at_5_max
value: 71.4171
- type: nauc_precision_at_5_std
value: 38.7281
- type: nauc_precision_at_5_diff1
value: 61.96430000000001
- type: nauc_precision_at_10_max
value: 78.67739999999999
- type: nauc_precision_at_10_std
value: 57.8693
- type: nauc_precision_at_10_diff1
value: 57.7189
- type: nauc_precision_at_20_max
value: 76.7024
- type: nauc_precision_at_20_std
value: 54.76370000000001
- type: nauc_precision_at_20_diff1
value: 56.3392
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 12.5808
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 53.888999999999996
- type: nauc_mrr_at_1_std
value: 11.0525
- type: nauc_mrr_at_1_diff1
value: 74.8286
- type: nauc_mrr_at_3_max
value: 58.2855
- type: nauc_mrr_at_3_std
value: 18.0442
- type: nauc_mrr_at_3_diff1
value: 70.7787
- type: nauc_mrr_at_5_max
value: 58.875299999999996
- type: nauc_mrr_at_5_std
value: 18.276999999999997
- type: nauc_mrr_at_5_diff1
value: 70.7961
- type: nauc_mrr_at_10_max
value: 58.7896
- type: nauc_mrr_at_10_std
value: 18.697
- type: nauc_mrr_at_10_diff1
value: 70.759
- type: nauc_mrr_at_20_max
value: 58.7205
- type: nauc_mrr_at_20_std
value: 18.5786
- type: nauc_mrr_at_20_diff1
value: 70.74380000000001
- type: nauc_mrr_at_100_max
value: 58.64319999999999
- type: nauc_mrr_at_100_std
value: 18.418799999999997
- type: nauc_mrr_at_100_diff1
value: 70.9314
- type: nauc_mrr_at_1000_max
value: 58.634699999999995
- type: nauc_mrr_at_1000_std
value: 18.401999999999997
- type: nauc_mrr_at_1000_diff1
value: 70.9434
- type: main_score
value: 70.32300000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 52.151
- type: ndcg_at_3
value: 63.644999999999996
- type: ndcg_at_5
value: 66.561
- type: ndcg_at_10
value: 69.059
- type: ndcg_at_20
value: 69.985
- type: ndcg_at_100
value: 71.643
- type: ndcg_at_1000
value: 71.801
- type: map_at_1
value: 52.151
- type: map_at_3
value: 60.753
- type: map_at_5
value: 62.392
- type: map_at_10
value: 63.461
- type: map_at_20
value: 63.702000000000005
- type: map_at_100
value: 63.954
- type: map_at_1000
value: 63.963
- type: recall_at_1
value: 52.151
- type: recall_at_3
value: 72.043
- type: recall_at_5
value: 79.032
- type: recall_at_10
value: 86.559
- type: recall_at_20
value: 90.323
- type: recall_at_100
value: 98.925
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.151
- type: precision_at_3
value: 24.014
- type: precision_at_5
value: 15.806000000000001
- type: precision_at_10
value: 8.656
- type: precision_at_20
value: 4.516
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.1505
- type: mrr_at_3
value: 60.752700000000004
- type: mrr_at_5
value: 62.3925
- type: mrr_at_10
value: 63.4607
- type: mrr_at_20
value: 63.702000000000005
- type: mrr_at_100
value: 63.953700000000005
- type: mrr_at_1000
value: 63.96340000000001
- type: nauc_ndcg_at_1_max
value: 49.414
- type: nauc_ndcg_at_1_std
value: 26.262400000000003
- type: nauc_ndcg_at_1_diff1
value: 54.0133
- type: nauc_ndcg_at_3_max
value: 54.1356
- type: nauc_ndcg_at_3_std
value: 30.669
- type: nauc_ndcg_at_3_diff1
value: 46.9126
- type: nauc_ndcg_at_5_max
value: 54.16570000000001
- type: nauc_ndcg_at_5_std
value: 31.907799999999998
- type: nauc_ndcg_at_5_diff1
value: 47.6523
- type: nauc_ndcg_at_10_max
value: 50.79
- type: nauc_ndcg_at_10_std
value: 28.937800000000003
- type: nauc_ndcg_at_10_diff1
value: 45.2259
- type: nauc_ndcg_at_20_max
value: 50.504400000000004
- type: nauc_ndcg_at_20_std
value: 29.454399999999996
- type: nauc_ndcg_at_20_diff1
value: 44.7774
- type: nauc_ndcg_at_100_max
value: 51.535799999999995
- type: nauc_ndcg_at_100_std
value: 29.2429
- type: nauc_ndcg_at_100_diff1
value: 47.5625
- type: nauc_ndcg_at_1000_max
value: 51.232299999999995
- type: nauc_ndcg_at_1000_std
value: 28.7314
- type: nauc_ndcg_at_1000_diff1
value: 47.7654
- type: nauc_map_at_1_max
value: 49.414
- type: nauc_map_at_1_std
value: 26.262400000000003
- type: nauc_map_at_1_diff1
value: 54.0133
- type: nauc_map_at_3_max
value: 52.367
- type: nauc_map_at_3_std
value: 28.741600000000002
- type: nauc_map_at_3_diff1
value: 48.7321
- type: nauc_map_at_5_max
value: 52.28660000000001
- type: nauc_map_at_5_std
value: 29.252899999999997
- type: nauc_map_at_5_diff1
value: 49.200300000000006
- type: nauc_map_at_10_max
value: 50.9833
- type: nauc_map_at_10_std
value: 28.0707
- type: nauc_map_at_10_diff1
value: 48.3651
- type: nauc_map_at_20_max
value: 50.9108
- type: nauc_map_at_20_std
value: 28.174300000000002
- type: nauc_map_at_20_diff1
value: 48.2832
- type: nauc_map_at_100_max
value: 51.0532
- type: nauc_map_at_100_std
value: 28.143099999999997
- type: nauc_map_at_100_diff1
value: 48.7424
- type: nauc_map_at_1000_max
value: 51.0382
- type: nauc_map_at_1000_std
value: 28.117900000000002
- type: nauc_map_at_1000_diff1
value: 48.752
- type: nauc_recall_at_1_max
value: 49.414
- type: nauc_recall_at_1_std
value: 26.262400000000003
- type: nauc_recall_at_1_diff1
value: 54.0133
- type: nauc_recall_at_3_max
value: 60.6724
- type: nauc_recall_at_3_std
value: 37.8962
- type: nauc_recall_at_3_diff1
value: 40.5005
- type: nauc_recall_at_5_max
value: 62.6191
- type: nauc_recall_at_5_std
value: 44.1519
- type: nauc_recall_at_5_diff1
value: 41.1881
- type: nauc_recall_at_10_max
value: 47.4454
- type: nauc_recall_at_10_std
value: 33.1899
- type: nauc_recall_at_10_diff1
value: 24.0447
- type: nauc_recall_at_20_max
value: 43.7071
- type: nauc_recall_at_20_std
value: 39.8658
- type: nauc_recall_at_20_diff1
value: 12.4499
- type: nauc_recall_at_100_max
value: 93.44839999999999
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 19.0591
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 49.414
- type: nauc_precision_at_1_std
value: 26.262400000000003
- type: nauc_precision_at_1_diff1
value: 54.0133
- type: nauc_precision_at_3_max
value: 60.6724
- type: nauc_precision_at_3_std
value: 37.8962
- type: nauc_precision_at_3_diff1
value: 40.5005
- type: nauc_precision_at_5_max
value: 62.6191
- type: nauc_precision_at_5_std
value: 44.1519
- type: nauc_precision_at_5_diff1
value: 41.1881
- type: nauc_precision_at_10_max
value: 47.4454
- type: nauc_precision_at_10_std
value: 33.1899
- type: nauc_precision_at_10_diff1
value: 24.0447
- type: nauc_precision_at_20_max
value: 43.7071
- type: nauc_precision_at_20_std
value: 39.8658
- type: nauc_precision_at_20_diff1
value: 12.4499
- type: nauc_precision_at_100_max
value: 93.44839999999999
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 19.0591
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 49.414
- type: nauc_mrr_at_1_std
value: 26.262400000000003
- type: nauc_mrr_at_1_diff1
value: 54.0133
- type: nauc_mrr_at_3_max
value: 52.367
- type: nauc_mrr_at_3_std
value: 28.741600000000002
- type: nauc_mrr_at_3_diff1
value: 48.7321
- type: nauc_mrr_at_5_max
value: 52.28660000000001
- type: nauc_mrr_at_5_std
value: 29.252899999999997
- type: nauc_mrr_at_5_diff1
value: 49.200300000000006
- type: nauc_mrr_at_10_max
value: 50.9833
- type: nauc_mrr_at_10_std
value: 28.0707
- type: nauc_mrr_at_10_diff1
value: 48.3651
- type: nauc_mrr_at_20_max
value: 50.9108
- type: nauc_mrr_at_20_std
value: 28.174300000000002
- type: nauc_mrr_at_20_diff1
value: 48.2832
- type: nauc_mrr_at_100_max
value: 51.0532
- type: nauc_mrr_at_100_std
value: 28.143099999999997
- type: nauc_mrr_at_100_diff1
value: 48.7424
- type: nauc_mrr_at_1000_max
value: 51.0382
- type: nauc_mrr_at_1000_std
value: 28.117900000000002
- type: nauc_mrr_at_1000_diff1
value: 48.752
- type: main_score
value: 69.059
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 51.534
- type: ndcg_at_3
value: 61.24699999999999
- type: ndcg_at_5
value: 63.28
- type: ndcg_at_10
value: 65.712
- type: ndcg_at_20
value: 67.104
- type: ndcg_at_100
value: 69.376
- type: ndcg_at_1000
value: 69.553
- type: map_at_1
value: 51.534
- type: map_at_3
value: 58.691
- type: map_at_5
value: 59.826
- type: map_at_10
value: 60.86
- type: map_at_20
value: 61.24000000000001
- type: map_at_100
value: 61.546
- type: map_at_1000
value: 61.556
- type: recall_at_1
value: 51.534
- type: recall_at_3
value: 68.71199999999999
- type: recall_at_5
value: 73.61999999999999
- type: recall_at_10
value: 80.982
- type: recall_at_20
value: 86.503
- type: recall_at_100
value: 98.773
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 51.534
- type: precision_at_3
value: 22.904
- type: precision_at_5
value: 14.724
- type: precision_at_10
value: 8.097999999999999
- type: precision_at_20
value: 4.324999999999999
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 51.5337
- type: mrr_at_3
value: 58.6912
- type: mrr_at_5
value: 59.82619999999999
- type: mrr_at_10
value: 60.8596
- type: mrr_at_20
value: 61.2401
- type: mrr_at_100
value: 61.546299999999995
- type: mrr_at_1000
value: 61.5563
- type: nauc_ndcg_at_1_max
value: 61.617200000000004
- type: nauc_ndcg_at_1_std
value: 31.049599999999998
- type: nauc_ndcg_at_1_diff1
value: 63.227500000000006
- type: nauc_ndcg_at_3_max
value: 59.7893
- type: nauc_ndcg_at_3_std
value: 32.8623
- type: nauc_ndcg_at_3_diff1
value: 59.6656
- type: nauc_ndcg_at_5_max
value: 60.5831
- type: nauc_ndcg_at_5_std
value: 32.596599999999995
- type: nauc_ndcg_at_5_diff1
value: 59.4883
- type: nauc_ndcg_at_10_max
value: 62.497400000000006
- type: nauc_ndcg_at_10_std
value: 34.550599999999996
- type: nauc_ndcg_at_10_diff1
value: 59.155899999999995
- type: nauc_ndcg_at_20_max
value: 62.740899999999996
- type: nauc_ndcg_at_20_std
value: 36.7174
- type: nauc_ndcg_at_20_diff1
value: 58.0935
- type: nauc_ndcg_at_100_max
value: 61.864399999999996
- type: nauc_ndcg_at_100_std
value: 34.528
- type: nauc_ndcg_at_100_diff1
value: 59.4356
- type: nauc_ndcg_at_1000_max
value: 61.7297
- type: nauc_ndcg_at_1000_std
value: 34.083200000000005
- type: nauc_ndcg_at_1000_diff1
value: 59.516999999999996
- type: nauc_map_at_1_max
value: 61.617200000000004
- type: nauc_map_at_1_std
value: 31.049599999999998
- type: nauc_map_at_1_diff1
value: 63.227500000000006
- type: nauc_map_at_3_max
value: 60.293699999999994
- type: nauc_map_at_3_std
value: 32.2575
- type: nauc_map_at_3_diff1
value: 60.5793
- type: nauc_map_at_5_max
value: 60.801899999999996
- type: nauc_map_at_5_std
value: 32.2098
- type: nauc_map_at_5_diff1
value: 60.5253
- type: nauc_map_at_10_max
value: 61.565599999999996
- type: nauc_map_at_10_std
value: 32.8874
- type: nauc_map_at_10_diff1
value: 60.4275
- type: nauc_map_at_20_max
value: 61.602199999999996
- type: nauc_map_at_20_std
value: 33.4131
- type: nauc_map_at_20_diff1
value: 60.1488
- type: nauc_map_at_100_max
value: 61.4753
- type: nauc_map_at_100_std
value: 33.1531
- type: nauc_map_at_100_diff1
value: 60.2734
- type: nauc_map_at_1000_max
value: 61.4688
- type: nauc_map_at_1000_std
value: 33.1323
- type: nauc_map_at_1000_diff1
value: 60.278600000000004
- type: nauc_recall_at_1_max
value: 61.617200000000004
- type: nauc_recall_at_1_std
value: 31.049599999999998
- type: nauc_recall_at_1_diff1
value: 63.227500000000006
- type: nauc_recall_at_3_max
value: 58.0671
- type: nauc_recall_at_3_std
value: 34.976600000000005
- type: nauc_recall_at_3_diff1
value: 56.5781
- type: nauc_recall_at_5_max
value: 59.7593
- type: nauc_recall_at_5_std
value: 33.9046
- type: nauc_recall_at_5_diff1
value: 55.5195
- type: nauc_recall_at_10_max
value: 68.0843
- type: nauc_recall_at_10_std
value: 43.8292
- type: nauc_recall_at_10_diff1
value: 52.74100000000001
- type: nauc_recall_at_20_max
value: 72.26
- type: nauc_recall_at_20_std
value: 63.8486
- type: nauc_recall_at_20_diff1
value: 42.700700000000005
- type: nauc_recall_at_100_max
value: 79.5792
- type: nauc_recall_at_100_std
value: 93.4774
- type: nauc_recall_at_100_diff1
value: 49.547200000000004
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 61.617200000000004
- type: nauc_precision_at_1_std
value: 31.049599999999998
- type: nauc_precision_at_1_diff1
value: 63.227500000000006
- type: nauc_precision_at_3_max
value: 58.0671
- type: nauc_precision_at_3_std
value: 34.976600000000005
- type: nauc_precision_at_3_diff1
value: 56.5781
- type: nauc_precision_at_5_max
value: 59.7593
- type: nauc_precision_at_5_std
value: 33.9046
- type: nauc_precision_at_5_diff1
value: 55.5195
- type: nauc_precision_at_10_max
value: 68.0843
- type: nauc_precision_at_10_std
value: 43.8292
- type: nauc_precision_at_10_diff1
value: 52.74100000000001
- type: nauc_precision_at_20_max
value: 72.26
- type: nauc_precision_at_20_std
value: 63.8486
- type: nauc_precision_at_20_diff1
value: 42.700700000000005
- type: nauc_precision_at_100_max
value: 79.5792
- type: nauc_precision_at_100_std
value: 93.4774
- type: nauc_precision_at_100_diff1
value: 49.547200000000004
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 61.617200000000004
- type: nauc_mrr_at_1_std
value: 31.049599999999998
- type: nauc_mrr_at_1_diff1
value: 63.227500000000006
- type: nauc_mrr_at_3_max
value: 60.293699999999994
- type: nauc_mrr_at_3_std
value: 32.2575
- type: nauc_mrr_at_3_diff1
value: 60.5793
- type: nauc_mrr_at_5_max
value: 60.801899999999996
- type: nauc_mrr_at_5_std
value: 32.2098
- type: nauc_mrr_at_5_diff1
value: 60.5253
- type: nauc_mrr_at_10_max
value: 61.565599999999996
- type: nauc_mrr_at_10_std
value: 32.8874
- type: nauc_mrr_at_10_diff1
value: 60.4275
- type: nauc_mrr_at_20_max
value: 61.602199999999996
- type: nauc_mrr_at_20_std
value: 33.4131
- type: nauc_mrr_at_20_diff1
value: 60.1488
- type: nauc_mrr_at_100_max
value: 61.4753
- type: nauc_mrr_at_100_std
value: 33.1531
- type: nauc_mrr_at_100_diff1
value: 60.2734
- type: nauc_mrr_at_1000_max
value: 61.4688
- type: nauc_mrr_at_1000_std
value: 33.1323
- type: nauc_mrr_at_1000_diff1
value: 60.278600000000004
- type: main_score
value: 65.712
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 49.468
- type: ndcg_at_3
value: 61.385
- type: ndcg_at_5
value: 63.858000000000004
- type: ndcg_at_10
value: 65.85499999999999
- type: ndcg_at_20
value: 68.014
- type: ndcg_at_100
value: 69.71300000000001
- type: ndcg_at_1000
value: 69.788
- type: map_at_1
value: 49.468
- type: map_at_3
value: 58.511
- type: map_at_5
value: 59.919999999999995
- type: map_at_10
value: 60.702999999999996
- type: map_at_20
value: 61.3
- type: map_at_100
value: 61.541000000000004
- type: map_at_1000
value: 61.545
- type: recall_at_1
value: 49.468
- type: recall_at_3
value: 69.681
- type: recall_at_5
value: 75.532
- type: recall_at_10
value: 81.915
- type: recall_at_20
value: 90.426
- type: recall_at_100
value: 99.468
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 49.468
- type: precision_at_3
value: 23.227
- type: precision_at_5
value: 15.106
- type: precision_at_10
value: 8.190999999999999
- type: precision_at_20
value: 4.521
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 49.4681
- type: mrr_at_3
value: 58.510600000000004
- type: mrr_at_5
value: 59.9202
- type: mrr_at_10
value: 60.703300000000006
- type: mrr_at_20
value: 61.30029999999999
- type: mrr_at_100
value: 61.54110000000001
- type: mrr_at_1000
value: 61.5451
- type: nauc_ndcg_at_1_max
value: 54.7345
- type: nauc_ndcg_at_1_std
value: 11.2512
- type: nauc_ndcg_at_1_diff1
value: 70.6991
- type: nauc_ndcg_at_3_max
value: 57.2006
- type: nauc_ndcg_at_3_std
value: 17.3244
- type: nauc_ndcg_at_3_diff1
value: 59.90220000000001
- type: nauc_ndcg_at_5_max
value: 58.880900000000004
- type: nauc_ndcg_at_5_std
value: 18.7365
- type: nauc_ndcg_at_5_diff1
value: 60.3304
- type: nauc_ndcg_at_10_max
value: 58.3229
- type: nauc_ndcg_at_10_std
value: 19.6983
- type: nauc_ndcg_at_10_diff1
value: 59.8994
- type: nauc_ndcg_at_20_max
value: 57.5958
- type: nauc_ndcg_at_20_std
value: 16.8184
- type: nauc_ndcg_at_20_diff1
value: 60.4564
- type: nauc_ndcg_at_100_max
value: 57.407300000000006
- type: nauc_ndcg_at_100_std
value: 17.0753
- type: nauc_ndcg_at_100_diff1
value: 62.3023
- type: nauc_ndcg_at_1000_max
value: 57.2677
- type: nauc_ndcg_at_1000_std
value: 16.8035
- type: nauc_ndcg_at_1000_diff1
value: 62.3891
- type: nauc_map_at_1_max
value: 54.7345
- type: nauc_map_at_1_std
value: 11.2512
- type: nauc_map_at_1_diff1
value: 70.6991
- type: nauc_map_at_3_max
value: 56.36409999999999
- type: nauc_map_at_3_std
value: 15.7645
- type: nauc_map_at_3_diff1
value: 62.83109999999999
- type: nauc_map_at_5_max
value: 57.2165
- type: nauc_map_at_5_std
value: 16.4827
- type: nauc_map_at_5_diff1
value: 63.129900000000006
- type: nauc_map_at_10_max
value: 56.964099999999995
- type: nauc_map_at_10_std
value: 16.713900000000002
- type: nauc_map_at_10_diff1
value: 63.033300000000004
- type: nauc_map_at_20_max
value: 56.8291
- type: nauc_map_at_20_std
value: 16.0261
- type: nauc_map_at_20_diff1
value: 63.2795
- type: nauc_map_at_100_max
value: 56.7943
- type: nauc_map_at_100_std
value: 16.0463
- type: nauc_map_at_100_diff1
value: 63.5264
- type: nauc_map_at_1000_max
value: 56.7884
- type: nauc_map_at_1000_std
value: 16.034699999999997
- type: nauc_map_at_1000_diff1
value: 63.5303
- type: nauc_recall_at_1_max
value: 54.7345
- type: nauc_recall_at_1_std
value: 11.2512
- type: nauc_recall_at_1_diff1
value: 70.6991
- type: nauc_recall_at_3_max
value: 60.1676
- type: nauc_recall_at_3_std
value: 22.659499999999998
- type: nauc_recall_at_3_diff1
value: 49.8032
- type: nauc_recall_at_5_max
value: 65.889
- type: nauc_recall_at_5_std
value: 27.8308
- type: nauc_recall_at_5_diff1
value: 49.3429
- type: nauc_recall_at_10_max
value: 65.3261
- type: nauc_recall_at_10_std
value: 35.828700000000005
- type: nauc_recall_at_10_diff1
value: 44.0245
- type: nauc_recall_at_20_max
value: 62.0154
- type: nauc_recall_at_20_std
value: 18.0916
- type: nauc_recall_at_20_diff1
value: 35.9279
- type: nauc_recall_at_100_max
value: 100.0
- type: nauc_recall_at_100_std
value: 100.0
- type: nauc_recall_at_100_diff1
value: 35.8386
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 54.7345
- type: nauc_precision_at_1_std
value: 11.2512
- type: nauc_precision_at_1_diff1
value: 70.6991
- type: nauc_precision_at_3_max
value: 60.1676
- type: nauc_precision_at_3_std
value: 22.659499999999998
- type: nauc_precision_at_3_diff1
value: 49.8032
- type: nauc_precision_at_5_max
value: 65.889
- type: nauc_precision_at_5_std
value: 27.8308
- type: nauc_precision_at_5_diff1
value: 49.3429
- type: nauc_precision_at_10_max
value: 65.3261
- type: nauc_precision_at_10_std
value: 35.828700000000005
- type: nauc_precision_at_10_diff1
value: 44.0245
- type: nauc_precision_at_20_max
value: 62.0154
- type: nauc_precision_at_20_std
value: 18.0916
- type: nauc_precision_at_20_diff1
value: 35.9279
- type: nauc_precision_at_100_max
value: 100.0
- type: nauc_precision_at_100_std
value: 100.0
- type: nauc_precision_at_100_diff1
value: 35.8386
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 54.7345
- type: nauc_mrr_at_1_std
value: 11.2512
- type: nauc_mrr_at_1_diff1
value: 70.6991
- type: nauc_mrr_at_3_max
value: 56.36409999999999
- type: nauc_mrr_at_3_std
value: 15.7645
- type: nauc_mrr_at_3_diff1
value: 62.83109999999999
- type: nauc_mrr_at_5_max
value: 57.2165
- type: nauc_mrr_at_5_std
value: 16.4827
- type: nauc_mrr_at_5_diff1
value: 63.129900000000006
- type: nauc_mrr_at_10_max
value: 56.964099999999995
- type: nauc_mrr_at_10_std
value: 16.713900000000002
- type: nauc_mrr_at_10_diff1
value: 63.033300000000004
- type: nauc_mrr_at_20_max
value: 56.8291
- type: nauc_mrr_at_20_std
value: 16.0261
- type: nauc_mrr_at_20_diff1
value: 63.2795
- type: nauc_mrr_at_100_max
value: 56.7943
- type: nauc_mrr_at_100_std
value: 16.0463
- type: nauc_mrr_at_100_diff1
value: 63.5264
- type: nauc_mrr_at_1000_max
value: 56.7884
- type: nauc_mrr_at_1000_std
value: 16.034699999999997
- type: nauc_mrr_at_1000_diff1
value: 63.5303
- type: main_score
value: 65.85499999999999
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 31.258000000000003
- type: ndcg_at_3
value: 38.134
- type: ndcg_at_5
value: 40.389
- type: ndcg_at_10
value: 42.781000000000006
- type: ndcg_at_20
value: 44.545
- type: ndcg_at_100
value: 47.325
- type: ndcg_at_1000
value: 49.282
- type: map_at_1
value: 31.249
- type: map_at_3
value: 36.424
- type: map_at_5
value: 37.671
- type: map_at_10
value: 38.663
- type: map_at_20
value: 39.152
- type: map_at_100
value: 39.521
- type: map_at_1000
value: 39.585
- type: recall_at_1
value: 31.249
- type: recall_at_3
value: 43.081
- type: recall_at_5
value: 48.575
- type: recall_at_10
value: 55.944
- type: recall_at_20
value: 62.882000000000005
- type: recall_at_100
value: 78.089
- type: recall_at_1000
value: 93.971
- type: precision_at_1
value: 31.258000000000003
- type: precision_at_3
value: 14.363000000000001
- type: precision_at_5
value: 9.717
- type: precision_at_10
value: 5.595
- type: precision_at_20
value: 3.145
- type: precision_at_100
value: 0.781
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 31.258200000000002
- type: mrr_at_3
value: 36.4335
- type: mrr_at_5
value: 37.6805
- type: mrr_at_10
value: 38.672200000000004
- type: mrr_at_20
value: 39.1614
- type: mrr_at_100
value: 39.5298
- type: mrr_at_1000
value: 39.5948
- type: nauc_ndcg_at_1_max
value: 50.8135
- type: nauc_ndcg_at_1_std
value: 9.5316
- type: nauc_ndcg_at_1_diff1
value: 56.077799999999996
- type: nauc_ndcg_at_3_max
value: 51.4486
- type: nauc_ndcg_at_3_std
value: 11.4698
- type: nauc_ndcg_at_3_diff1
value: 50.6076
- type: nauc_ndcg_at_5_max
value: 51.0535
- type: nauc_ndcg_at_5_std
value: 12.133
- type: nauc_ndcg_at_5_diff1
value: 49.0051
- type: nauc_ndcg_at_10_max
value: 51.324999999999996
- type: nauc_ndcg_at_10_std
value: 13.861299999999998
- type: nauc_ndcg_at_10_diff1
value: 48.4724
- type: nauc_ndcg_at_20_max
value: 51.07390000000001
- type: nauc_ndcg_at_20_std
value: 14.4511
- type: nauc_ndcg_at_20_diff1
value: 47.870200000000004
- type: nauc_ndcg_at_100_max
value: 51.4803
- type: nauc_ndcg_at_100_std
value: 15.289900000000001
- type: nauc_ndcg_at_100_diff1
value: 48.0109
- type: nauc_ndcg_at_1000_max
value: 51.4174
- type: nauc_ndcg_at_1000_std
value: 14.527399999999998
- type: nauc_ndcg_at_1000_diff1
value: 48.6374
- type: nauc_map_at_1_max
value: 50.768899999999995
- type: nauc_map_at_1_std
value: 9.501
- type: nauc_map_at_1_diff1
value: 56.049400000000006
- type: nauc_map_at_3_max
value: 51.27460000000001
- type: nauc_map_at_3_std
value: 10.922
- type: nauc_map_at_3_diff1
value: 51.8738
- type: nauc_map_at_5_max
value: 51.0655
- type: nauc_map_at_5_std
value: 11.282
- type: nauc_map_at_5_diff1
value: 51.0045
- type: nauc_map_at_10_max
value: 51.158899999999996
- type: nauc_map_at_10_std
value: 11.956
- type: nauc_map_at_10_diff1
value: 50.787099999999995
- type: nauc_map_at_20_max
value: 51.081500000000005
- type: nauc_map_at_20_std
value: 12.0977
- type: nauc_map_at_20_diff1
value: 50.6269
- type: nauc_map_at_100_max
value: 51.1262
- type: nauc_map_at_100_std
value: 12.1966
- type: nauc_map_at_100_diff1
value: 50.6523
- type: nauc_map_at_1000_max
value: 51.1258
- type: nauc_map_at_1000_std
value: 12.1769
- type: nauc_map_at_1000_diff1
value: 50.67230000000001
- type: nauc_recall_at_1_max
value: 50.768899999999995
- type: nauc_recall_at_1_std
value: 9.501
- type: nauc_recall_at_1_diff1
value: 56.049400000000006
- type: nauc_recall_at_3_max
value: 51.9034
- type: nauc_recall_at_3_std
value: 13.0311
- type: nauc_recall_at_3_diff1
value: 46.9878
- type: nauc_recall_at_5_max
value: 50.907500000000006
- type: nauc_recall_at_5_std
value: 14.695
- type: nauc_recall_at_5_diff1
value: 42.965900000000005
- type: nauc_recall_at_10_max
value: 51.871500000000005
- type: nauc_recall_at_10_std
value: 20.6095
- type: nauc_recall_at_10_diff1
value: 40.908899999999996
- type: nauc_recall_at_20_max
value: 50.8848
- type: nauc_recall_at_20_std
value: 23.9653
- type: nauc_recall_at_20_diff1
value: 37.5667
- type: nauc_recall_at_100_max
value: 54.52
- type: nauc_recall_at_100_std
value: 35.6453
- type: nauc_recall_at_100_diff1
value: 34.0519
- type: nauc_recall_at_1000_max
value: 58.397
- type: nauc_recall_at_1000_std
value: 49.6012
- type: nauc_recall_at_1000_diff1
value: 27.825699999999998
- type: nauc_precision_at_1_max
value: 50.8135
- type: nauc_precision_at_1_std
value: 9.5316
- type: nauc_precision_at_1_diff1
value: 56.077799999999996
- type: nauc_precision_at_3_max
value: 51.9505
- type: nauc_precision_at_3_std
value: 13.0616
- type: nauc_precision_at_3_diff1
value: 47.0194
- type: nauc_precision_at_5_max
value: 50.9555
- type: nauc_precision_at_5_std
value: 14.7261
- type: nauc_precision_at_5_diff1
value: 42.998
- type: nauc_precision_at_10_max
value: 51.926399999999994
- type: nauc_precision_at_10_std
value: 20.644399999999997
- type: nauc_precision_at_10_diff1
value: 40.9459
- type: nauc_precision_at_20_max
value: 50.9483
- type: nauc_precision_at_20_std
value: 24.0057
- type: nauc_precision_at_20_diff1
value: 37.6094
- type: nauc_precision_at_100_max
value: 54.5785
- type: nauc_precision_at_100_std
value: 35.7331
- type: nauc_precision_at_100_diff1
value: 34.098800000000004
- type: nauc_precision_at_1000_max
value: 58.599900000000005
- type: nauc_precision_at_1000_std
value: 49.8547
- type: nauc_precision_at_1000_diff1
value: 28.0201
- type: nauc_mrr_at_1_max
value: 50.8135
- type: nauc_mrr_at_1_std
value: 9.5316
- type: nauc_mrr_at_1_diff1
value: 56.077799999999996
- type: nauc_mrr_at_3_max
value: 51.3185
- type: nauc_mrr_at_3_std
value: 10.952
- type: nauc_mrr_at_3_diff1
value: 51.902
- type: nauc_mrr_at_5_max
value: 51.1095
- type: nauc_mrr_at_5_std
value: 11.3122
- type: nauc_mrr_at_5_diff1
value: 51.0328
- type: nauc_mrr_at_10_max
value: 51.2033
- type: nauc_mrr_at_10_std
value: 11.9863
- type: nauc_mrr_at_10_diff1
value: 50.8157
- type: nauc_mrr_at_20_max
value: 51.1262
- type: nauc_mrr_at_20_std
value: 12.1282
- type: nauc_mrr_at_20_diff1
value: 50.6557
- type: nauc_mrr_at_100_max
value: 51.169799999999995
- type: nauc_mrr_at_100_std
value: 12.2269
- type: nauc_mrr_at_100_diff1
value: 50.6806
- type: nauc_mrr_at_1000_max
value: 51.1695
- type: nauc_mrr_at_1000_std
value: 12.2072
- type: nauc_mrr_at_1000_diff1
value: 50.700599999999994
- type: main_score
value: 42.781000000000006
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 38.774
- type: ndcg_at_3
value: 47.213
- type: ndcg_at_5
value: 50.19
- type: ndcg_at_10
value: 52.71
- type: ndcg_at_20
value: 54.429
- type: ndcg_at_100
value: 56.69
- type: ndcg_at_1000
value: 58.214
- type: map_at_1
value: 38.774
- type: map_at_3
value: 45.161
- type: map_at_5
value: 46.814
- type: map_at_10
value: 47.848
- type: map_at_20
value: 48.32
- type: map_at_100
value: 48.620999999999995
- type: map_at_1000
value: 48.678
- type: recall_at_1
value: 38.774
- type: recall_at_3
value: 53.125
- type: recall_at_5
value: 60.346
- type: recall_at_10
value: 68.174
- type: recall_at_20
value: 74.97
- type: recall_at_100
value: 87.318
- type: recall_at_1000
value: 99.333
- type: precision_at_1
value: 38.774
- type: precision_at_3
value: 17.718
- type: precision_at_5
value: 12.075
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_20
value: 3.75
- type: precision_at_100
value: 0.874
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 38.774300000000004
- type: mrr_at_3
value: 45.176
- type: mrr_at_5
value: 46.8295
- type: mrr_at_10
value: 47.8628
- type: mrr_at_20
value: 48.3352
- type: mrr_at_100
value: 48.6348
- type: mrr_at_1000
value: 48.692099999999996
- type: nauc_ndcg_at_1_max
value: 53.3984
- type: nauc_ndcg_at_1_std
value: 11.4226
- type: nauc_ndcg_at_1_diff1
value: 63.624
- type: nauc_ndcg_at_3_max
value: 53.212199999999996
- type: nauc_ndcg_at_3_std
value: 12.8275
- type: nauc_ndcg_at_3_diff1
value: 56.9653
- type: nauc_ndcg_at_5_max
value: 52.9301
- type: nauc_ndcg_at_5_std
value: 13.019900000000002
- type: nauc_ndcg_at_5_diff1
value: 56.2881
- type: nauc_ndcg_at_10_max
value: 53.21
- type: nauc_ndcg_at_10_std
value: 14.477899999999998
- type: nauc_ndcg_at_10_diff1
value: 55.312
- type: nauc_ndcg_at_20_max
value: 53.5602
- type: nauc_ndcg_at_20_std
value: 15.2451
- type: nauc_ndcg_at_20_diff1
value: 55.5818
- type: nauc_ndcg_at_100_max
value: 53.466499999999996
- type: nauc_ndcg_at_100_std
value: 15.035799999999998
- type: nauc_ndcg_at_100_diff1
value: 56.2241
- type: nauc_ndcg_at_1000_max
value: 53.4527
- type: nauc_ndcg_at_1000_std
value: 14.2771
- type: nauc_ndcg_at_1000_diff1
value: 56.8137
- type: nauc_map_at_1_max
value: 53.3984
- type: nauc_map_at_1_std
value: 11.4226
- type: nauc_map_at_1_diff1
value: 63.624
- type: nauc_map_at_3_max
value: 53.3564
- type: nauc_map_at_3_std
value: 12.5543
- type: nauc_map_at_3_diff1
value: 58.557199999999995
- type: nauc_map_at_5_max
value: 53.2292
- type: nauc_map_at_5_std
value: 12.6335
- type: nauc_map_at_5_diff1
value: 58.2353
- type: nauc_map_at_10_max
value: 53.36450000000001
- type: nauc_map_at_10_std
value: 13.2102
- type: nauc_map_at_10_diff1
value: 57.89450000000001
- type: nauc_map_at_20_max
value: 53.438900000000004
- type: nauc_map_at_20_std
value: 13.374600000000001
- type: nauc_map_at_20_diff1
value: 57.9695
- type: nauc_map_at_100_max
value: 53.411699999999996
- type: nauc_map_at_100_std
value: 13.3329
- type: nauc_map_at_100_diff1
value: 58.04899999999999
- type: nauc_map_at_1000_max
value: 53.4104
- type: nauc_map_at_1000_std
value: 13.313600000000001
- type: nauc_map_at_1000_diff1
value: 58.0651
- type: nauc_recall_at_1_max
value: 53.3984
- type: nauc_recall_at_1_std
value: 11.4226
- type: nauc_recall_at_1_diff1
value: 63.624
- type: nauc_recall_at_3_max
value: 52.747299999999996
- type: nauc_recall_at_3_std
value: 13.602900000000002
- type: nauc_recall_at_3_diff1
value: 52.2385
- type: nauc_recall_at_5_max
value: 51.8513
- type: nauc_recall_at_5_std
value: 14.263300000000001
- type: nauc_recall_at_5_diff1
value: 49.971700000000006
- type: nauc_recall_at_10_max
value: 52.5828
- type: nauc_recall_at_10_std
value: 19.8161
- type: nauc_recall_at_10_diff1
value: 45.2543
- type: nauc_recall_at_20_max
value: 54.559400000000004
- type: nauc_recall_at_20_std
value: 25.3807
- type: nauc_recall_at_20_diff1
value: 44.8606
- type: nauc_recall_at_100_max
value: 54.732400000000005
- type: nauc_recall_at_100_std
value: 30.830000000000002
- type: nauc_recall_at_100_diff1
value: 45.0631
- type: nauc_recall_at_1000_max
value: 75.4921
- type: nauc_recall_at_1000_std
value: 35.5406
- type: nauc_recall_at_1000_diff1
value: 57.560900000000004
- type: nauc_precision_at_1_max
value: 53.3984
- type: nauc_precision_at_1_std
value: 11.4226
- type: nauc_precision_at_1_diff1
value: 63.624
- type: nauc_precision_at_3_max
value: 52.7321
- type: nauc_precision_at_3_std
value: 13.622600000000002
- type: nauc_precision_at_3_diff1
value: 52.2056
- type: nauc_precision_at_5_max
value: 51.8444
- type: nauc_precision_at_5_std
value: 14.287600000000001
- type: nauc_precision_at_5_diff1
value: 49.9448
- type: nauc_precision_at_10_max
value: 52.575300000000006
- type: nauc_precision_at_10_std
value: 19.8478
- type: nauc_precision_at_10_diff1
value: 45.2201
- type: nauc_precision_at_20_max
value: 54.564299999999996
- type: nauc_precision_at_20_std
value: 25.4289
- type: nauc_precision_at_20_diff1
value: 44.829299999999996
- type: nauc_precision_at_100_max
value: 54.0934
- type: nauc_precision_at_100_std
value: 30.652
- type: nauc_precision_at_100_diff1
value: 44.410500000000006
- type: nauc_precision_at_1000_max
value: 62.376
- type: nauc_precision_at_1000_std
value: 32.0345
- type: nauc_precision_at_1000_diff1
value: 45.353500000000004
- type: nauc_mrr_at_1_max
value: 53.3984
- type: nauc_mrr_at_1_std
value: 11.4226
- type: nauc_mrr_at_1_diff1
value: 63.624
- type: nauc_mrr_at_3_max
value: 53.3455
- type: nauc_mrr_at_3_std
value: 12.5627
- type: nauc_mrr_at_3_diff1
value: 58.5384
- type: nauc_mrr_at_5_max
value: 53.2182
- type: nauc_mrr_at_5_std
value: 12.642100000000001
- type: nauc_mrr_at_5_diff1
value: 58.216100000000004
- type: nauc_mrr_at_10_max
value: 53.353300000000004
- type: nauc_mrr_at_10_std
value: 13.219
- type: nauc_mrr_at_10_diff1
value: 57.875
- type: nauc_mrr_at_20_max
value: 53.4276
- type: nauc_mrr_at_20_std
value: 13.383500000000002
- type: nauc_mrr_at_20_diff1
value: 57.949799999999996
- type: nauc_mrr_at_100_max
value: 53.40089999999999
- type: nauc_mrr_at_100_std
value: 13.3411
- type: nauc_mrr_at_100_diff1
value: 58.030300000000004
- type: nauc_mrr_at_1000_max
value: 53.3996
- type: nauc_mrr_at_1000_std
value: 13.3218
- type: nauc_mrr_at_1000_diff1
value: 58.0465
- type: main_score
value: 52.71
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 31.563999999999997
- type: ndcg_at_3
value: 39.35
- type: ndcg_at_5
value: 41.817
- type: ndcg_at_10
value: 44.275
- type: ndcg_at_20
value: 46.278000000000006
- type: ndcg_at_100
value: 49.04
- type: ndcg_at_1000
value: 50.897000000000006
- type: map_at_1
value: 31.563999999999997
- type: map_at_3
value: 37.456
- type: map_at_5
value: 38.824
- type: map_at_10
value: 39.843
- type: map_at_20
value: 40.400000000000006
- type: map_at_100
value: 40.783
- type: map_at_1000
value: 40.847
- type: recall_at_1
value: 31.563999999999997
- type: recall_at_3
value: 44.824000000000005
- type: recall_at_5
value: 50.806
- type: recall_at_10
value: 58.382999999999996
- type: recall_at_20
value: 66.251
- type: recall_at_100
value: 81.123
- type: recall_at_1000
value: 96.08
- type: precision_at_1
value: 31.563999999999997
- type: precision_at_3
value: 14.940999999999999
- type: precision_at_5
value: 10.165000000000001
- type: precision_at_10
value: 5.84
- type: precision_at_20
value: 3.314
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.096
- type: mrr_at_1
value: 31.5641
- type: mrr_at_3
value: 37.4562
- type: mrr_at_5
value: 38.8281
- type: mrr_at_10
value: 39.847
- type: mrr_at_20
value: 40.4043
- type: mrr_at_100
value: 40.787099999999995
- type: mrr_at_1000
value: 40.8507
- type: nauc_ndcg_at_1_max
value: 45.0961
- type: nauc_ndcg_at_1_std
value: 6.0832999999999995
- type: nauc_ndcg_at_1_diff1
value: 56.4542
- type: nauc_ndcg_at_3_max
value: 45.8009
- type: nauc_ndcg_at_3_std
value: 7.946599999999999
- type: nauc_ndcg_at_3_diff1
value: 50.22990000000001
- type: nauc_ndcg_at_5_max
value: 45.7759
- type: nauc_ndcg_at_5_std
value: 8.793
- type: nauc_ndcg_at_5_diff1
value: 48.47
- type: nauc_ndcg_at_10_max
value: 45.896100000000004
- type: nauc_ndcg_at_10_std
value: 9.767900000000001
- type: nauc_ndcg_at_10_diff1
value: 47.862500000000004
- type: nauc_ndcg_at_20_max
value: 45.9985
- type: nauc_ndcg_at_20_std
value: 10.7251
- type: nauc_ndcg_at_20_diff1
value: 47.3885
- type: nauc_ndcg_at_100_max
value: 46.1803
- type: nauc_ndcg_at_100_std
value: 11.471
- type: nauc_ndcg_at_100_diff1
value: 47.6423
- type: nauc_ndcg_at_1000_max
value: 45.9962
- type: nauc_ndcg_at_1000_std
value: 10.4737
- type: nauc_ndcg_at_1000_diff1
value: 48.4473
- type: nauc_map_at_1_max
value: 45.0961
- type: nauc_map_at_1_std
value: 6.0832999999999995
- type: nauc_map_at_1_diff1
value: 56.4542
- type: nauc_map_at_3_max
value: 45.685199999999995
- type: nauc_map_at_3_std
value: 7.498199999999999
- type: nauc_map_at_3_diff1
value: 51.702999999999996
- type: nauc_map_at_5_max
value: 45.6663
- type: nauc_map_at_5_std
value: 7.9673
- type: nauc_map_at_5_diff1
value: 50.723
- type: nauc_map_at_10_max
value: 45.7094
- type: nauc_map_at_10_std
value: 8.3551
- type: nauc_map_at_10_diff1
value: 50.497099999999996
- type: nauc_map_at_20_max
value: 45.738299999999995
- type: nauc_map_at_20_std
value: 8.587
- type: nauc_map_at_20_diff1
value: 50.386900000000004
- type: nauc_map_at_100_max
value: 45.7463
- type: nauc_map_at_100_std
value: 8.6732
- type: nauc_map_at_100_diff1
value: 50.4202
- type: nauc_map_at_1000_max
value: 45.7398
- type: nauc_map_at_1000_std
value: 8.6477
- type: nauc_map_at_1000_diff1
value: 50.443599999999996
- type: nauc_recall_at_1_max
value: 45.0961
- type: nauc_recall_at_1_std
value: 6.0832999999999995
- type: nauc_recall_at_1_diff1
value: 56.4542
- type: nauc_recall_at_3_max
value: 46.110299999999995
- type: nauc_recall_at_3_std
value: 9.2308
- type: nauc_recall_at_3_diff1
value: 46.0213
- type: nauc_recall_at_5_max
value: 46.0402
- type: nauc_recall_at_5_std
value: 11.305900000000001
- type: nauc_recall_at_5_diff1
value: 41.6502
- type: nauc_recall_at_10_max
value: 46.4824
- type: nauc_recall_at_10_std
value: 14.7249
- type: nauc_recall_at_10_diff1
value: 39.0873
- type: nauc_recall_at_20_max
value: 47.0124
- type: nauc_recall_at_20_std
value: 20.002
- type: nauc_recall_at_20_diff1
value: 35.6458
- type: nauc_recall_at_100_max
value: 49.6722
- type: nauc_recall_at_100_std
value: 32.310100000000006
- type: nauc_recall_at_100_diff1
value: 31.805
- type: nauc_recall_at_1000_max
value: 50.651599999999995
- type: nauc_recall_at_1000_std
value: 40.5728
- type: nauc_recall_at_1000_diff1
value: 27.4545
- type: nauc_precision_at_1_max
value: 45.0961
- type: nauc_precision_at_1_std
value: 6.0832999999999995
- type: nauc_precision_at_1_diff1
value: 56.4542
- type: nauc_precision_at_3_max
value: 46.110299999999995
- type: nauc_precision_at_3_std
value: 9.2308
- type: nauc_precision_at_3_diff1
value: 46.0213
- type: nauc_precision_at_5_max
value: 46.1272
- type: nauc_precision_at_5_std
value: 11.351700000000001
- type: nauc_precision_at_5_diff1
value: 41.6701
- type: nauc_precision_at_10_max
value: 46.5768
- type: nauc_precision_at_10_std
value: 14.7753
- type: nauc_precision_at_10_diff1
value: 39.108399999999996
- type: nauc_precision_at_20_max
value: 47.123599999999996
- type: nauc_precision_at_20_std
value: 20.0731
- type: nauc_precision_at_20_diff1
value: 35.6993
- type: nauc_precision_at_100_max
value: 49.7989
- type: nauc_precision_at_100_std
value: 32.385999999999996
- type: nauc_precision_at_100_diff1
value: 31.779000000000003
- type: nauc_precision_at_1000_max
value: 50.600100000000005
- type: nauc_precision_at_1000_std
value: 40.419
- type: nauc_precision_at_1000_diff1
value: 27.254099999999998
- type: nauc_mrr_at_1_max
value: 45.0961
- type: nauc_mrr_at_1_std
value: 6.0832999999999995
- type: nauc_mrr_at_1_diff1
value: 56.4542
- type: nauc_mrr_at_3_max
value: 45.685199999999995
- type: nauc_mrr_at_3_std
value: 7.498199999999999
- type: nauc_mrr_at_3_diff1
value: 51.702999999999996
- type: nauc_mrr_at_5_max
value: 45.6835
- type: nauc_mrr_at_5_std
value: 7.9763
- type: nauc_mrr_at_5_diff1
value: 50.7273
- type: nauc_mrr_at_10_max
value: 45.7267
- type: nauc_mrr_at_10_std
value: 8.364099999999999
- type: nauc_mrr_at_10_diff1
value: 50.5014
- type: nauc_mrr_at_20_max
value: 45.7556
- type: nauc_mrr_at_20_std
value: 8.5966
- type: nauc_mrr_at_20_diff1
value: 50.393
- type: nauc_mrr_at_100_max
value: 45.760400000000004
- type: nauc_mrr_at_100_std
value: 8.6807
- type: nauc_mrr_at_100_diff1
value: 50.425799999999995
- type: nauc_mrr_at_1000_max
value: 45.753899999999994
- type: nauc_mrr_at_1000_std
value: 8.655100000000001
- type: nauc_mrr_at_1000_diff1
value: 50.448899999999995
- type: main_score
value: 44.275
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 36.35
- type: ndcg_at_3
value: 44.869
- type: ndcg_at_5
value: 47.461999999999996
- type: ndcg_at_10
value: 50.101
- type: ndcg_at_20
value: 52.002
- type: ndcg_at_100
value: 54.449999999999996
- type: ndcg_at_1000
value: 56.084999999999994
- type: map_at_1
value: 36.35
- type: map_at_3
value: 42.796
- type: map_at_5
value: 44.242
- type: map_at_10
value: 45.344
- type: map_at_20
value: 45.87
- type: map_at_100
value: 46.202
- type: map_at_1000
value: 46.262
- type: recall_at_1
value: 36.35
- type: recall_at_3
value: 50.859
- type: recall_at_5
value: 57.128
- type: recall_at_10
value: 65.217
- type: recall_at_20
value: 72.7
- type: recall_at_100
value: 85.996
- type: recall_at_1000
value: 98.989
- type: precision_at_1
value: 36.35
- type: precision_at_3
value: 16.953
- type: precision_at_5
value: 11.426
- type: precision_at_10
value: 6.522
- type: precision_at_20
value: 3.6350000000000002
- type: precision_at_100
value: 0.86
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 36.3498
- type: mrr_at_3
value: 42.7958
- type: mrr_at_5
value: 44.2417
- type: mrr_at_10
value: 45.3442
- type: mrr_at_20
value: 45.8705
- type: mrr_at_100
value: 46.2022
- type: mrr_at_1000
value: 46.261799999999994
- type: nauc_ndcg_at_1_max
value: 49.756
- type: nauc_ndcg_at_1_std
value: 8.7422
- type: nauc_ndcg_at_1_diff1
value: 60.206199999999995
- type: nauc_ndcg_at_3_max
value: 51.8621
- type: nauc_ndcg_at_3_std
value: 11.6268
- type: nauc_ndcg_at_3_diff1
value: 53.6381
- type: nauc_ndcg_at_5_max
value: 52.5281
- type: nauc_ndcg_at_5_std
value: 12.8893
- type: nauc_ndcg_at_5_diff1
value: 52.311099999999996
- type: nauc_ndcg_at_10_max
value: 52.7753
- type: nauc_ndcg_at_10_std
value: 14.358699999999999
- type: nauc_ndcg_at_10_diff1
value: 51.960300000000004
- type: nauc_ndcg_at_20_max
value: 52.880700000000004
- type: nauc_ndcg_at_20_std
value: 15.427
- type: nauc_ndcg_at_20_diff1
value: 51.6363
- type: nauc_ndcg_at_100_max
value: 52.317800000000005
- type: nauc_ndcg_at_100_std
value: 14.510000000000002
- type: nauc_ndcg_at_100_diff1
value: 52.2435
- type: nauc_ndcg_at_1000_max
value: 52.1913
- type: nauc_ndcg_at_1000_std
value: 13.5793
- type: nauc_ndcg_at_1000_diff1
value: 52.95910000000001
- type: nauc_map_at_1_max
value: 49.756
- type: nauc_map_at_1_std
value: 8.7422
- type: nauc_map_at_1_diff1
value: 60.206199999999995
- type: nauc_map_at_3_max
value: 51.3348
- type: nauc_map_at_3_std
value: 10.7914
- type: nauc_map_at_3_diff1
value: 55.191100000000006
- type: nauc_map_at_5_max
value: 51.6705
- type: nauc_map_at_5_std
value: 11.4773
- type: nauc_map_at_5_diff1
value: 54.46959999999999
- type: nauc_map_at_10_max
value: 51.7134
- type: nauc_map_at_10_std
value: 11.9884
- type: nauc_map_at_10_diff1
value: 54.341300000000004
- type: nauc_map_at_20_max
value: 51.734100000000005
- type: nauc_map_at_20_std
value: 12.2386
- type: nauc_map_at_20_diff1
value: 54.2967
- type: nauc_map_at_100_max
value: 51.6624
- type: nauc_map_at_100_std
value: 12.1183
- type: nauc_map_at_100_diff1
value: 54.379999999999995
- type: nauc_map_at_1000_max
value: 51.661
- type: nauc_map_at_1000_std
value: 12.0917
- type: nauc_map_at_1000_diff1
value: 54.4056
- type: nauc_recall_at_1_max
value: 49.756
- type: nauc_recall_at_1_std
value: 8.7422
- type: nauc_recall_at_1_diff1
value: 60.206199999999995
- type: nauc_recall_at_3_max
value: 53.41590000000001
- type: nauc_recall_at_3_std
value: 14.1345
- type: nauc_recall_at_3_diff1
value: 49.0993
- type: nauc_recall_at_5_max
value: 55.3167
- type: nauc_recall_at_5_std
value: 17.4988
- type: nauc_recall_at_5_diff1
value: 45.4789
- type: nauc_recall_at_10_max
value: 56.843900000000005
- type: nauc_recall_at_10_std
value: 23.6997
- type: nauc_recall_at_10_diff1
value: 43.419799999999995
- type: nauc_recall_at_20_max
value: 58.146699999999996
- type: nauc_recall_at_20_std
value: 31.131199999999996
- type: nauc_recall_at_20_diff1
value: 39.9097
- type: nauc_recall_at_100_max
value: 55.3601
- type: nauc_recall_at_100_std
value: 31.958399999999997
- type: nauc_recall_at_100_diff1
value: 38.465700000000005
- type: nauc_recall_at_1000_max
value: 56.1925
- type: nauc_recall_at_1000_std
value: 25.717299999999998
- type: nauc_recall_at_1000_diff1
value: 25.905099999999997
- type: nauc_precision_at_1_max
value: 49.756
- type: nauc_precision_at_1_std
value: 8.7422
- type: nauc_precision_at_1_diff1
value: 60.206199999999995
- type: nauc_precision_at_3_max
value: 53.41590000000001
- type: nauc_precision_at_3_std
value: 14.1345
- type: nauc_precision_at_3_diff1
value: 49.0993
- type: nauc_precision_at_5_max
value: 55.3167
- type: nauc_precision_at_5_std
value: 17.4988
- type: nauc_precision_at_5_diff1
value: 45.4789
- type: nauc_precision_at_10_max
value: 56.843900000000005
- type: nauc_precision_at_10_std
value: 23.6997
- type: nauc_precision_at_10_diff1
value: 43.419799999999995
- type: nauc_precision_at_20_max
value: 58.146699999999996
- type: nauc_precision_at_20_std
value: 31.131199999999996
- type: nauc_precision_at_20_diff1
value: 39.9097
- type: nauc_precision_at_100_max
value: 55.3601
- type: nauc_precision_at_100_std
value: 31.958399999999997
- type: nauc_precision_at_100_diff1
value: 38.465700000000005
- type: nauc_precision_at_1000_max
value: 56.1925
- type: nauc_precision_at_1000_std
value: 25.717299999999998
- type: nauc_precision_at_1000_diff1
value: 25.905099999999997
- type: nauc_mrr_at_1_max
value: 49.756
- type: nauc_mrr_at_1_std
value: 8.7422
- type: nauc_mrr_at_1_diff1
value: 60.206199999999995
- type: nauc_mrr_at_3_max
value: 51.3348
- type: nauc_mrr_at_3_std
value: 10.7914
- type: nauc_mrr_at_3_diff1
value: 55.191100000000006
- type: nauc_mrr_at_5_max
value: 51.6705
- type: nauc_mrr_at_5_std
value: 11.4773
- type: nauc_mrr_at_5_diff1
value: 54.46959999999999
- type: nauc_mrr_at_10_max
value: 51.7134
- type: nauc_mrr_at_10_std
value: 11.9884
- type: nauc_mrr_at_10_diff1
value: 54.341300000000004
- type: nauc_mrr_at_20_max
value: 51.734100000000005
- type: nauc_mrr_at_20_std
value: 12.2386
- type: nauc_mrr_at_20_diff1
value: 54.2967
- type: nauc_mrr_at_100_max
value: 51.6624
- type: nauc_mrr_at_100_std
value: 12.1183
- type: nauc_mrr_at_100_diff1
value: 54.379999999999995
- type: nauc_mrr_at_1000_max
value: 51.661
- type: nauc_mrr_at_1000_std
value: 12.0917
- type: nauc_mrr_at_1000_diff1
value: 54.4056
- type: main_score
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 33.097
- type: ndcg_at_3
value: 41.56
- type: ndcg_at_5
value: 44.437
- type: ndcg_at_10
value: 47.157
- type: ndcg_at_20
value: 49.370999999999995
- type: ndcg_at_100
value: 52.11
- type: ndcg_at_1000
value: 53.746
- type: map_at_1
value: 33.097
- type: map_at_3
value: 39.532000000000004
- type: map_at_5
value: 41.141
- type: map_at_10
value: 42.253
- type: map_at_20
value: 42.861
- type: map_at_100
value: 43.228
- type: map_at_1000
value: 43.288
- type: recall_at_1
value: 33.097
- type: recall_at_3
value: 47.406
- type: recall_at_5
value: 54.342
- type: recall_at_10
value: 62.807
- type: recall_at_20
value: 71.54599999999999
- type: recall_at_100
value: 86.50999999999999
- type: recall_at_1000
value: 99.454
- type: precision_at_1
value: 33.097
- type: precision_at_3
value: 15.802
- type: precision_at_5
value: 10.868
- type: precision_at_10
value: 6.281000000000001
- type: precision_at_20
value: 3.5770000000000004
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 33.0967
- type: mrr_at_3
value: 39.5321
- type: mrr_at_5
value: 41.1405
- type: mrr_at_10
value: 42.2528
- type: mrr_at_20
value: 42.8615
- type: mrr_at_100
value: 43.2277
- type: mrr_at_1000
value: 43.2878
- type: nauc_ndcg_at_1_max
value: 41.5056
- type: nauc_ndcg_at_1_std
value: -0.7759
- type: nauc_ndcg_at_1_diff1
value: 54.4983
- type: nauc_ndcg_at_3_max
value: 43.7037
- type: nauc_ndcg_at_3_std
value: 0.9127
- type: nauc_ndcg_at_3_diff1
value: 48.093399999999995
- type: nauc_ndcg_at_5_max
value: 44.412600000000005
- type: nauc_ndcg_at_5_std
value: 2.7959
- type: nauc_ndcg_at_5_diff1
value: 47.2115
- type: nauc_ndcg_at_10_max
value: 45.1547
- type: nauc_ndcg_at_10_std
value: 4.5252
- type: nauc_ndcg_at_10_diff1
value: 46.35
- type: nauc_ndcg_at_20_max
value: 45.3115
- type: nauc_ndcg_at_20_std
value: 5.2706
- type: nauc_ndcg_at_20_diff1
value: 46.6213
- type: nauc_ndcg_at_100_max
value: 45.4305
- type: nauc_ndcg_at_100_std
value: 5.226299999999999
- type: nauc_ndcg_at_100_diff1
value: 47.2901
- type: nauc_ndcg_at_1000_max
value: 44.7915
- type: nauc_ndcg_at_1000_std
value: 4.0262
- type: nauc_ndcg_at_1000_diff1
value: 47.800599999999996
- type: nauc_map_at_1_max
value: 41.5056
- type: nauc_map_at_1_std
value: -0.7759
- type: nauc_map_at_1_diff1
value: 54.4983
- type: nauc_map_at_3_max
value: 43.2876
- type: nauc_map_at_3_std
value: 0.5027
- type: nauc_map_at_3_diff1
value: 49.6127
- type: nauc_map_at_5_max
value: 43.688900000000004
- type: nauc_map_at_5_std
value: 1.5645
- type: nauc_map_at_5_diff1
value: 49.1502
- type: nauc_map_at_10_max
value: 43.9749
- type: nauc_map_at_10_std
value: 2.2498
- type: nauc_map_at_10_diff1
value: 48.827
- type: nauc_map_at_20_max
value: 44.0064
- type: nauc_map_at_20_std
value: 2.4167
- type: nauc_map_at_20_diff1
value: 48.9157
- type: nauc_map_at_100_max
value: 44.0336
- type: nauc_map_at_100_std
value: 2.4309000000000003
- type: nauc_map_at_100_diff1
value: 48.997600000000006
- type: nauc_map_at_1000_max
value: 44.016
- type: nauc_map_at_1000_std
value: 2.3993
- type: nauc_map_at_1000_diff1
value: 49.016799999999996
- type: nauc_recall_at_1_max
value: 41.5056
- type: nauc_recall_at_1_std
value: -0.7759
- type: nauc_recall_at_1_diff1
value: 54.4983
- type: nauc_recall_at_3_max
value: 44.857200000000006
- type: nauc_recall_at_3_std
value: 2.0964
- type: nauc_recall_at_3_diff1
value: 43.721199999999996
- type: nauc_recall_at_5_max
value: 46.6269
- type: nauc_recall_at_5_std
value: 6.746
- type: nauc_recall_at_5_diff1
value: 41.2489
- type: nauc_recall_at_10_max
value: 49.47
- type: nauc_recall_at_10_std
value: 13.1434
- type: nauc_recall_at_10_diff1
value: 37.5806
- type: nauc_recall_at_20_max
value: 51.146100000000004
- type: nauc_recall_at_20_std
value: 18.7664
- type: nauc_recall_at_20_diff1
value: 37.2469
- type: nauc_recall_at_100_max
value: 57.036500000000004
- type: nauc_recall_at_100_std
value: 28.7043
- type: nauc_recall_at_100_diff1
value: 37.934200000000004
- type: nauc_recall_at_1000_max
value: 44.6101
- type: nauc_recall_at_1000_std
value: 37.7026
- type: nauc_recall_at_1000_diff1
value: 31.8598
- type: nauc_precision_at_1_max
value: 41.5056
- type: nauc_precision_at_1_std
value: -0.7759
- type: nauc_precision_at_1_diff1
value: 54.4983
- type: nauc_precision_at_3_max
value: 44.857200000000006
- type: nauc_precision_at_3_std
value: 2.0964
- type: nauc_precision_at_3_diff1
value: 43.721199999999996
- type: nauc_precision_at_5_max
value: 46.6269
- type: nauc_precision_at_5_std
value: 6.746
- type: nauc_precision_at_5_diff1
value: 41.2489
- type: nauc_precision_at_10_max
value: 49.47
- type: nauc_precision_at_10_std
value: 13.1434
- type: nauc_precision_at_10_diff1
value: 37.5806
- type: nauc_precision_at_20_max
value: 51.146100000000004
- type: nauc_precision_at_20_std
value: 18.7664
- type: nauc_precision_at_20_diff1
value: 37.2469
- type: nauc_precision_at_100_max
value: 57.036500000000004
- type: nauc_precision_at_100_std
value: 28.7043
- type: nauc_precision_at_100_diff1
value: 37.934200000000004
- type: nauc_precision_at_1000_max
value: 44.6101
- type: nauc_precision_at_1000_std
value: 37.7026
- type: nauc_precision_at_1000_diff1
value: 31.8598
- type: nauc_mrr_at_1_max
value: 41.5056
- type: nauc_mrr_at_1_std
value: -0.7759
- type: nauc_mrr_at_1_diff1
value: 54.4983
- type: nauc_mrr_at_3_max
value: 43.2876
- type: nauc_mrr_at_3_std
value: 0.5027
- type: nauc_mrr_at_3_diff1
value: 49.6127
- type: nauc_mrr_at_5_max
value: 43.688900000000004
- type: nauc_mrr_at_5_std
value: 1.5645
- type: nauc_mrr_at_5_diff1
value: 49.1502
- type: nauc_mrr_at_10_max
value: 43.9749
- type: nauc_mrr_at_10_std
value: 2.2498
- type: nauc_mrr_at_10_diff1
value: 48.827
- type: nauc_mrr_at_20_max
value: 44.0064
- type: nauc_mrr_at_20_std
value: 2.4167
- type: nauc_mrr_at_20_diff1
value: 48.9157
- type: nauc_mrr_at_100_max
value: 44.0336
- type: nauc_mrr_at_100_std
value: 2.4309000000000003
- type: nauc_mrr_at_100_diff1
value: 48.997600000000006
- type: nauc_mrr_at_1000_max
value: 44.016
- type: nauc_mrr_at_1000_std
value: 2.3993
- type: nauc_mrr_at_1000_diff1
value: 49.016799999999996
- type: main_score
value: 47.157
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 33.122
- type: ndcg_at_3
value: 41.82
- type: ndcg_at_5
value: 44.2
- type: ndcg_at_10
value: 46.912
- type: ndcg_at_20
value: 49.124
- type: ndcg_at_100
value: 51.806
- type: ndcg_at_1000
value: 53.474999999999994
- type: map_at_1
value: 33.122
- type: map_at_3
value: 39.692
- type: map_at_5
value: 41.016000000000005
- type: map_at_10
value: 42.161
- type: map_at_20
value: 42.774
- type: map_at_100
value: 43.139
- type: map_at_1000
value: 43.2
- type: recall_at_1
value: 33.122
- type: recall_at_3
value: 47.973
- type: recall_at_5
value: 53.737
- type: recall_at_10
value: 61.992999999999995
- type: recall_at_20
value: 70.68900000000001
- type: recall_at_100
value: 85.247
- type: recall_at_1000
value: 98.48599999999999
- type: precision_at_1
value: 33.122
- type: precision_at_3
value: 15.991
- type: precision_at_5
value: 10.747
- type: precision_at_10
value: 6.199000000000001
- type: precision_at_20
value: 3.5340000000000003
- type: precision_at_100
value: 0.852
- type: precision_at_1000
value: 0.098
- type: mrr_at_1
value: 33.1216
- type: mrr_at_3
value: 39.6922
- type: mrr_at_5
value: 41.0161
- type: mrr_at_10
value: 42.160599999999995
- type: mrr_at_20
value: 42.774
- type: mrr_at_100
value: 43.1385
- type: mrr_at_1000
value: 43.199799999999996
- type: nauc_ndcg_at_1_max
value: 49.1834
- type: nauc_ndcg_at_1_std
value: 6.8612
- type: nauc_ndcg_at_1_diff1
value: 55.1215
- type: nauc_ndcg_at_3_max
value: 48.7315
- type: nauc_ndcg_at_3_std
value: 8.5129
- type: nauc_ndcg_at_3_diff1
value: 46.6492
- type: nauc_ndcg_at_5_max
value: 48.8836
- type: nauc_ndcg_at_5_std
value: 9.5124
- type: nauc_ndcg_at_5_diff1
value: 45.9731
- type: nauc_ndcg_at_10_max
value: 48.403
- type: nauc_ndcg_at_10_std
value: 10.4213
- type: nauc_ndcg_at_10_diff1
value: 45.522800000000004
- type: nauc_ndcg_at_20_max
value: 48.4306
- type: nauc_ndcg_at_20_std
value: 11.264299999999999
- type: nauc_ndcg_at_20_diff1
value: 45.2984
- type: nauc_ndcg_at_100_max
value: 48.7782
- type: nauc_ndcg_at_100_std
value: 11.4887
- type: nauc_ndcg_at_100_diff1
value: 45.7048
- type: nauc_ndcg_at_1000_max
value: 48.6585
- type: nauc_ndcg_at_1000_std
value: 10.5363
- type: nauc_ndcg_at_1000_diff1
value: 46.3558
- type: nauc_map_at_1_max
value: 49.1834
- type: nauc_map_at_1_std
value: 6.8612
- type: nauc_map_at_1_diff1
value: 55.1215
- type: nauc_map_at_3_max
value: 48.8541
- type: nauc_map_at_3_std
value: 8.035
- type: nauc_map_at_3_diff1
value: 48.606899999999996
- type: nauc_map_at_5_max
value: 48.916399999999996
- type: nauc_map_at_5_std
value: 8.5605
- type: nauc_map_at_5_diff1
value: 48.2496
- type: nauc_map_at_10_max
value: 48.7073
- type: nauc_map_at_10_std
value: 8.9177
- type: nauc_map_at_10_diff1
value: 48.0922
- type: nauc_map_at_20_max
value: 48.714200000000005
- type: nauc_map_at_20_std
value: 9.1213
- type: nauc_map_at_20_diff1
value: 48.0531
- type: nauc_map_at_100_max
value: 48.7618
- type: nauc_map_at_100_std
value: 9.157
- type: nauc_map_at_100_diff1
value: 48.0993
- type: nauc_map_at_1000_max
value: 48.762299999999996
- type: nauc_map_at_1000_std
value: 9.1389
- type: nauc_map_at_1000_diff1
value: 48.1273
- type: nauc_recall_at_1_max
value: 49.1834
- type: nauc_recall_at_1_std
value: 6.8612
- type: nauc_recall_at_1_diff1
value: 55.1215
- type: nauc_recall_at_3_max
value: 48.372
- type: nauc_recall_at_3_std
value: 9.9262
- type: nauc_recall_at_3_diff1
value: 41.0295
- type: nauc_recall_at_5_max
value: 48.8314
- type: nauc_recall_at_5_std
value: 12.5722
- type: nauc_recall_at_5_diff1
value: 39.0983
- type: nauc_recall_at_10_max
value: 47.281099999999995
- type: nauc_recall_at_10_std
value: 15.9864
- type: nauc_recall_at_10_diff1
value: 36.842999999999996
- type: nauc_recall_at_20_max
value: 47.2339
- type: nauc_recall_at_20_std
value: 21.2964
- type: nauc_recall_at_20_diff1
value: 34.102
- type: nauc_recall_at_100_max
value: 50.4448
- type: nauc_recall_at_100_std
value: 31.2116
- type: nauc_recall_at_100_diff1
value: 30.873099999999997
- type: nauc_recall_at_1000_max
value: 41.048899999999996
- type: nauc_recall_at_1000_std
value: 33.9471
- type: nauc_recall_at_1000_diff1
value: 1.6271
- type: nauc_precision_at_1_max
value: 49.1834
- type: nauc_precision_at_1_std
value: 6.8612
- type: nauc_precision_at_1_diff1
value: 55.1215
- type: nauc_precision_at_3_max
value: 48.372
- type: nauc_precision_at_3_std
value: 9.9262
- type: nauc_precision_at_3_diff1
value: 41.0295
- type: nauc_precision_at_5_max
value: 48.8314
- type: nauc_precision_at_5_std
value: 12.5722
- type: nauc_precision_at_5_diff1
value: 39.0983
- type: nauc_precision_at_10_max
value: 47.281099999999995
- type: nauc_precision_at_10_std
value: 15.9864
- type: nauc_precision_at_10_diff1
value: 36.842999999999996
- type: nauc_precision_at_20_max
value: 47.2339
- type: nauc_precision_at_20_std
value: 21.2964
- type: nauc_precision_at_20_diff1
value: 34.102
- type: nauc_precision_at_100_max
value: 50.4448
- type: nauc_precision_at_100_std
value: 31.2116
- type: nauc_precision_at_100_diff1
value: 30.873099999999997
- type: nauc_precision_at_1000_max
value: 41.048899999999996
- type: nauc_precision_at_1000_std
value: 33.9471
- type: nauc_precision_at_1000_diff1
value: 1.6271
- type: nauc_mrr_at_1_max
value: 49.1834
- type: nauc_mrr_at_1_std
value: 6.8612
- type: nauc_mrr_at_1_diff1
value: 55.1215
- type: nauc_mrr_at_3_max
value: 48.8541
- type: nauc_mrr_at_3_std
value: 8.035
- type: nauc_mrr_at_3_diff1
value: 48.606899999999996
- type: nauc_mrr_at_5_max
value: 48.916399999999996
- type: nauc_mrr_at_5_std
value: 8.5605
- type: nauc_mrr_at_5_diff1
value: 48.2496
- type: nauc_mrr_at_10_max
value: 48.7073
- type: nauc_mrr_at_10_std
value: 8.9177
- type: nauc_mrr_at_10_diff1
value: 48.0922
- type: nauc_mrr_at_20_max
value: 48.714200000000005
- type: nauc_mrr_at_20_std
value: 9.1213
- type: nauc_mrr_at_20_diff1
value: 48.0531
- type: nauc_mrr_at_100_max
value: 48.7618
- type: nauc_mrr_at_100_std
value: 9.157
- type: nauc_mrr_at_100_diff1
value: 48.0993
- type: nauc_mrr_at_1000_max
value: 48.762299999999996
- type: nauc_mrr_at_1000_std
value: 9.1389
- type: nauc_mrr_at_1000_diff1
value: 48.1273
- type: main_score
value: 46.912
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 34.467
- type: ndcg_at_3
value: 42.643
- type: ndcg_at_5
value: 45.559
- type: ndcg_at_10
value: 48.274
- type: ndcg_at_20
value: 50.107
- type: ndcg_at_100
value: 52.93
- type: ndcg_at_1000
value: 54.493
- type: map_at_1
value: 34.467
- type: map_at_3
value: 40.672999999999995
- type: map_at_5
value: 42.284
- type: map_at_10
value: 43.418
- type: map_at_20
value: 43.926
- type: map_at_100
value: 44.296
- type: map_at_1000
value: 44.352000000000004
- type: recall_at_1
value: 34.467
- type: recall_at_3
value: 48.326
- type: recall_at_5
value: 55.43900000000001
- type: recall_at_10
value: 63.754999999999995
- type: recall_at_20
value: 70.973
- type: recall_at_100
value: 86.454
- type: recall_at_1000
value: 98.902
- type: precision_at_1
value: 34.467
- type: precision_at_3
value: 16.109
- type: precision_at_5
value: 11.088000000000001
- type: precision_at_10
value: 6.3759999999999994
- type: precision_at_20
value: 3.549
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 34.466499999999996
- type: mrr_at_3
value: 40.6729
- type: mrr_at_5
value: 42.2838
- type: mrr_at_10
value: 43.4184
- type: mrr_at_20
value: 43.926
- type: mrr_at_100
value: 44.2962
- type: mrr_at_1000
value: 44.3522
- type: nauc_ndcg_at_1_max
value: 47.1153
- type: nauc_ndcg_at_1_std
value: 3.4273
- type: nauc_ndcg_at_1_diff1
value: 59.028000000000006
- type: nauc_ndcg_at_3_max
value: 47.509499999999996
- type: nauc_ndcg_at_3_std
value: 6.1509
- type: nauc_ndcg_at_3_diff1
value: 52.3682
- type: nauc_ndcg_at_5_max
value: 47.1969
- type: nauc_ndcg_at_5_std
value: 6.2892
- type: nauc_ndcg_at_5_diff1
value: 50.9259
- type: nauc_ndcg_at_10_max
value: 47.246500000000005
- type: nauc_ndcg_at_10_std
value: 7.1377
- type: nauc_ndcg_at_10_diff1
value: 50.049600000000005
- type: nauc_ndcg_at_20_max
value: 47.5816
- type: nauc_ndcg_at_20_std
value: 7.4744
- type: nauc_ndcg_at_20_diff1
value: 50.4117
- type: nauc_ndcg_at_100_max
value: 47.9685
- type: nauc_ndcg_at_100_std
value: 8.6481
- type: nauc_ndcg_at_100_diff1
value: 50.4111
- type: nauc_ndcg_at_1000_max
value: 47.7801
- type: nauc_ndcg_at_1000_std
value: 7.5201
- type: nauc_ndcg_at_1000_diff1
value: 51.4396
- type: nauc_map_at_1_max
value: 47.1153
- type: nauc_map_at_1_std
value: 3.4273
- type: nauc_map_at_1_diff1
value: 59.028000000000006
- type: nauc_map_at_3_max
value: 47.475
- type: nauc_map_at_3_std
value: 5.5253
- type: nauc_map_at_3_diff1
value: 53.9536
- type: nauc_map_at_5_max
value: 47.2987
- type: nauc_map_at_5_std
value: 5.6127
- type: nauc_map_at_5_diff1
value: 53.151700000000005
- type: nauc_map_at_10_max
value: 47.307300000000005
- type: nauc_map_at_10_std
value: 5.9255
- type: nauc_map_at_10_diff1
value: 52.8381
- type: nauc_map_at_20_max
value: 47.3942
- type: nauc_map_at_20_std
value: 5.992100000000001
- type: nauc_map_at_20_diff1
value: 52.9637
- type: nauc_map_at_100_max
value: 47.448800000000006
- type: nauc_map_at_100_std
value: 6.1400999999999994
- type: nauc_map_at_100_diff1
value: 52.97690000000001
- type: nauc_map_at_1000_max
value: 47.4484
- type: nauc_map_at_1000_std
value: 6.1112
- type: nauc_map_at_1000_diff1
value: 53.0145
- type: nauc_recall_at_1_max
value: 47.1153
- type: nauc_recall_at_1_std
value: 3.4273
- type: nauc_recall_at_1_diff1
value: 59.028000000000006
- type: nauc_recall_at_3_max
value: 47.5843
- type: nauc_recall_at_3_std
value: 7.9499
- type: nauc_recall_at_3_diff1
value: 47.7843
- type: nauc_recall_at_5_max
value: 46.8183
- type: nauc_recall_at_5_std
value: 8.3286
- type: nauc_recall_at_5_diff1
value: 43.9835
- type: nauc_recall_at_10_max
value: 47.025099999999995
- type: nauc_recall_at_10_std
value: 11.6536
- type: nauc_recall_at_10_diff1
value: 40.012100000000004
- type: nauc_recall_at_20_max
value: 48.6934
- type: nauc_recall_at_20_std
value: 14.212
- type: nauc_recall_at_20_diff1
value: 40.1054
- type: nauc_recall_at_100_max
value: 54.1462
- type: nauc_recall_at_100_std
value: 34.3519
- type: nauc_recall_at_100_diff1
value: 30.826900000000002
- type: nauc_recall_at_1000_max
value: 71.5059
- type: nauc_recall_at_1000_std
value: 62.956599999999995
- type: nauc_recall_at_1000_diff1
value: 26.123800000000003
- type: nauc_precision_at_1_max
value: 47.1153
- type: nauc_precision_at_1_std
value: 3.4273
- type: nauc_precision_at_1_diff1
value: 59.028000000000006
- type: nauc_precision_at_3_max
value: 47.5843
- type: nauc_precision_at_3_std
value: 7.9499
- type: nauc_precision_at_3_diff1
value: 47.7843
- type: nauc_precision_at_5_max
value: 46.8183
- type: nauc_precision_at_5_std
value: 8.3286
- type: nauc_precision_at_5_diff1
value: 43.9835
- type: nauc_precision_at_10_max
value: 47.025099999999995
- type: nauc_precision_at_10_std
value: 11.6536
- type: nauc_precision_at_10_diff1
value: 40.012100000000004
- type: nauc_precision_at_20_max
value: 48.6934
- type: nauc_precision_at_20_std
value: 14.212
- type: nauc_precision_at_20_diff1
value: 40.1054
- type: nauc_precision_at_100_max
value: 54.1462
- type: nauc_precision_at_100_std
value: 34.3519
- type: nauc_precision_at_100_diff1
value: 30.826900000000002
- type: nauc_precision_at_1000_max
value: 71.5059
- type: nauc_precision_at_1000_std
value: 62.956599999999995
- type: nauc_precision_at_1000_diff1
value: 26.123800000000003
- type: nauc_mrr_at_1_max
value: 47.1153
- type: nauc_mrr_at_1_std
value: 3.4273
- type: nauc_mrr_at_1_diff1
value: 59.028000000000006
- type: nauc_mrr_at_3_max
value: 47.475
- type: nauc_mrr_at_3_std
value: 5.5253
- type: nauc_mrr_at_3_diff1
value: 53.9536
- type: nauc_mrr_at_5_max
value: 47.2987
- type: nauc_mrr_at_5_std
value: 5.6127
- type: nauc_mrr_at_5_diff1
value: 53.151700000000005
- type: nauc_mrr_at_10_max
value: 47.307300000000005
- type: nauc_mrr_at_10_std
value: 5.9255
- type: nauc_mrr_at_10_diff1
value: 52.8381
- type: nauc_mrr_at_20_max
value: 47.3942
- type: nauc_mrr_at_20_std
value: 5.992100000000001
- type: nauc_mrr_at_20_diff1
value: 52.9637
- type: nauc_mrr_at_100_max
value: 47.448800000000006
- type: nauc_mrr_at_100_std
value: 6.1400999999999994
- type: nauc_mrr_at_100_diff1
value: 52.97690000000001
- type: nauc_mrr_at_1000_max
value: 47.4484
- type: nauc_mrr_at_1000_std
value: 6.1112
- type: nauc_mrr_at_1000_diff1
value: 53.0145
- type: main_score
value: 48.274
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 29.958000000000002
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 40.552
- type: ndcg_at_10
value: 43.376999999999995
- type: ndcg_at_20
value: 45.613
- type: ndcg_at_100
value: 48.671
- type: ndcg_at_1000
value: 50.554
- type: map_at_1
value: 29.958000000000002
- type: map_at_3
value: 35.86
- type: map_at_5
value: 37.391000000000005
- type: map_at_10
value: 38.557
- type: map_at_20
value: 39.162
- type: map_at_100
value: 39.581
- type: map_at_1000
value: 39.647
- type: recall_at_1
value: 29.958000000000002
- type: recall_at_3
value: 43.36
- type: recall_at_5
value: 50.090999999999994
- type: recall_at_10
value: 58.824
- type: recall_at_20
value: 67.738
- type: recall_at_100
value: 84.294
- type: recall_at_1000
value: 99.394
- type: precision_at_1
value: 29.958000000000002
- type: precision_at_3
value: 14.453
- type: precision_at_5
value: 10.018
- type: precision_at_10
value: 5.882
- type: precision_at_20
value: 3.3869999999999996
- type: precision_at_100
value: 0.843
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 29.9576
- type: mrr_at_3
value: 35.8601
- type: mrr_at_5
value: 37.3913
- type: mrr_at_10
value: 38.5573
- type: mrr_at_20
value: 39.162
- type: mrr_at_100
value: 39.5807
- type: mrr_at_1000
value: 39.6467
- type: nauc_ndcg_at_1_max
value: 52.1125
- type: nauc_ndcg_at_1_std
value: 6.8635
- type: nauc_ndcg_at_1_diff1
value: 55.853699999999996
- type: nauc_ndcg_at_3_max
value: 51.9481
- type: nauc_ndcg_at_3_std
value: 10.0406
- type: nauc_ndcg_at_3_diff1
value: 49.3114
- type: nauc_ndcg_at_5_max
value: 51.730900000000005
- type: nauc_ndcg_at_5_std
value: 11.7259
- type: nauc_ndcg_at_5_diff1
value: 47.0463
- type: nauc_ndcg_at_10_max
value: 51.0169
- type: nauc_ndcg_at_10_std
value: 11.9733
- type: nauc_ndcg_at_10_diff1
value: 45.7934
- type: nauc_ndcg_at_20_max
value: 50.9552
- type: nauc_ndcg_at_20_std
value: 12.5508
- type: nauc_ndcg_at_20_diff1
value: 45.4673
- type: nauc_ndcg_at_100_max
value: 51.207800000000006
- type: nauc_ndcg_at_100_std
value: 12.7859
- type: nauc_ndcg_at_100_diff1
value: 46.4388
- type: nauc_ndcg_at_1000_max
value: 51.4648
- type: nauc_ndcg_at_1000_std
value: 11.9752
- type: nauc_ndcg_at_1000_diff1
value: 47.3814
- type: nauc_map_at_1_max
value: 52.1125
- type: nauc_map_at_1_std
value: 6.8635
- type: nauc_map_at_1_diff1
value: 55.853699999999996
- type: nauc_map_at_3_max
value: 52.0278
- type: nauc_map_at_3_std
value: 9.2962
- type: nauc_map_at_3_diff1
value: 50.8881
- type: nauc_map_at_5_max
value: 51.9123
- type: nauc_map_at_5_std
value: 10.2351
- type: nauc_map_at_5_diff1
value: 49.6413
- type: nauc_map_at_10_max
value: 51.6105
- type: nauc_map_at_10_std
value: 10.3094
- type: nauc_map_at_10_diff1
value: 49.1541
- type: nauc_map_at_20_max
value: 51.6124
- type: nauc_map_at_20_std
value: 10.4738
- type: nauc_map_at_20_diff1
value: 49.0843
- type: nauc_map_at_100_max
value: 51.660700000000006
- type: nauc_map_at_100_std
value: 10.5072
- type: nauc_map_at_100_diff1
value: 49.228699999999996
- type: nauc_map_at_1000_max
value: 51.673199999999994
- type: nauc_map_at_1000_std
value: 10.4973
- type: nauc_map_at_1000_diff1
value: 49.2533
- type: nauc_recall_at_1_max
value: 52.1125
- type: nauc_recall_at_1_std
value: 6.8635
- type: nauc_recall_at_1_diff1
value: 55.853699999999996
- type: nauc_recall_at_3_max
value: 51.7055
- type: nauc_recall_at_3_std
value: 12.1475
- type: nauc_recall_at_3_diff1
value: 44.8305
- type: nauc_recall_at_5_max
value: 51.1529
- type: nauc_recall_at_5_std
value: 16.2625
- type: nauc_recall_at_5_diff1
value: 39.211400000000005
- type: nauc_recall_at_10_max
value: 48.8181
- type: nauc_recall_at_10_std
value: 17.5707
- type: nauc_recall_at_10_diff1
value: 34.3632
- type: nauc_recall_at_20_max
value: 48.024899999999995
- type: nauc_recall_at_20_std
value: 21.0431
- type: nauc_recall_at_20_diff1
value: 30.9652
- type: nauc_recall_at_100_max
value: 47.9518
- type: nauc_recall_at_100_std
value: 29.650199999999998
- type: nauc_recall_at_100_diff1
value: 30.1396
- type: nauc_recall_at_1000_max
value: 56.8226
- type: nauc_recall_at_1000_std
value: 65.794
- type: nauc_recall_at_1000_diff1
value: 27.686899999999998
- type: nauc_precision_at_1_max
value: 52.1125
- type: nauc_precision_at_1_std
value: 6.8635
- type: nauc_precision_at_1_diff1
value: 55.853699999999996
- type: nauc_precision_at_3_max
value: 51.7055
- type: nauc_precision_at_3_std
value: 12.1475
- type: nauc_precision_at_3_diff1
value: 44.8305
- type: nauc_precision_at_5_max
value: 51.1529
- type: nauc_precision_at_5_std
value: 16.2625
- type: nauc_precision_at_5_diff1
value: 39.211400000000005
- type: nauc_precision_at_10_max
value: 48.8181
- type: nauc_precision_at_10_std
value: 17.5707
- type: nauc_precision_at_10_diff1
value: 34.3632
- type: nauc_precision_at_20_max
value: 48.024899999999995
- type: nauc_precision_at_20_std
value: 21.0431
- type: nauc_precision_at_20_diff1
value: 30.9652
- type: nauc_precision_at_100_max
value: 47.9518
- type: nauc_precision_at_100_std
value: 29.650199999999998
- type: nauc_precision_at_100_diff1
value: 30.1396
- type: nauc_precision_at_1000_max
value: 56.8226
- type: nauc_precision_at_1000_std
value: 65.794
- type: nauc_precision_at_1000_diff1
value: 27.686899999999998
- type: nauc_mrr_at_1_max
value: 52.1125
- type: nauc_mrr_at_1_std
value: 6.8635
- type: nauc_mrr_at_1_diff1
value: 55.853699999999996
- type: nauc_mrr_at_3_max
value: 52.0278
- type: nauc_mrr_at_3_std
value: 9.2962
- type: nauc_mrr_at_3_diff1
value: 50.8881
- type: nauc_mrr_at_5_max
value: 51.9123
- type: nauc_mrr_at_5_std
value: 10.2351
- type: nauc_mrr_at_5_diff1
value: 49.6413
- type: nauc_mrr_at_10_max
value: 51.6105
- type: nauc_mrr_at_10_std
value: 10.3094
- type: nauc_mrr_at_10_diff1
value: 49.1541
- type: nauc_mrr_at_20_max
value: 51.6124
- type: nauc_mrr_at_20_std
value: 10.4738
- type: nauc_mrr_at_20_diff1
value: 49.0843
- type: nauc_mrr_at_100_max
value: 51.660700000000006
- type: nauc_mrr_at_100_std
value: 10.5072
- type: nauc_mrr_at_100_diff1
value: 49.228699999999996
- type: nauc_mrr_at_1000_max
value: 51.673199999999994
- type: nauc_mrr_at_1000_std
value: 10.4973
- type: nauc_mrr_at_1000_diff1
value: 49.2533
- type: main_score
value: 43.376999999999995
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 26.663999999999998
- type: ndcg_at_3
value: 33.85
- type: ndcg_at_5
value: 36.004000000000005
- type: ndcg_at_10
value: 38.4
- type: ndcg_at_20
value: 40.338
- type: ndcg_at_100
value: 43.419000000000004
- type: ndcg_at_1000
value: 45.631
- type: map_at_1
value: 26.655
- type: map_at_3
value: 32.099
- type: map_at_5
value: 33.29
- type: map_at_10
value: 34.278999999999996
- type: map_at_20
value: 34.813
- type: map_at_100
value: 35.221000000000004
- type: map_at_1000
value: 35.297
- type: recall_at_1
value: 26.655
- type: recall_at_3
value: 38.899
- type: recall_at_5
value: 44.15
- type: recall_at_10
value: 51.556000000000004
- type: recall_at_20
value: 59.207
- type: recall_at_100
value: 76.074
- type: recall_at_1000
value: 93.915
- type: precision_at_1
value: 26.663999999999998
- type: precision_at_3
value: 12.97
- type: precision_at_5
value: 8.831999999999999
- type: precision_at_10
value: 5.157
- type: precision_at_20
value: 2.9610000000000003
- type: precision_at_100
value: 0.761
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 26.664199999999997
- type: mrr_at_3
value: 32.1083
- type: mrr_at_5
value: 33.299
- type: mrr_at_10
value: 34.2886
- type: mrr_at_20
value: 34.8219
- type: mrr_at_100
value: 35.2302
- type: mrr_at_1000
value: 35.3063
- type: nauc_ndcg_at_1_max
value: 48.4014
- type: nauc_ndcg_at_1_std
value: 11.304
- type: nauc_ndcg_at_1_diff1
value: 54.139199999999995
- type: nauc_ndcg_at_3_max
value: 49.1937
- type: nauc_ndcg_at_3_std
value: 13.9525
- type: nauc_ndcg_at_3_diff1
value: 48.137
- type: nauc_ndcg_at_5_max
value: 49.235299999999995
- type: nauc_ndcg_at_5_std
value: 15.0341
- type: nauc_ndcg_at_5_diff1
value: 46.8281
- type: nauc_ndcg_at_10_max
value: 48.9836
- type: nauc_ndcg_at_10_std
value: 15.8809
- type: nauc_ndcg_at_10_diff1
value: 45.3256
- type: nauc_ndcg_at_20_max
value: 48.924299999999995
- type: nauc_ndcg_at_20_std
value: 16.6435
- type: nauc_ndcg_at_20_diff1
value: 45.047
- type: nauc_ndcg_at_100_max
value: 49.1173
- type: nauc_ndcg_at_100_std
value: 17.5779
- type: nauc_ndcg_at_100_diff1
value: 45.285199999999996
- type: nauc_ndcg_at_1000_max
value: 49.2097
- type: nauc_ndcg_at_1000_std
value: 16.829900000000002
- type: nauc_ndcg_at_1000_diff1
value: 46.0814
- type: nauc_map_at_1_max
value: 48.3592
- type: nauc_map_at_1_std
value: 11.2728
- type: nauc_map_at_1_diff1
value: 54.098
- type: nauc_map_at_3_max
value: 49.0619
- type: nauc_map_at_3_std
value: 13.3646
- type: nauc_map_at_3_diff1
value: 49.473800000000004
- type: nauc_map_at_5_max
value: 49.0995
- type: nauc_map_at_5_std
value: 13.974900000000002
- type: nauc_map_at_5_diff1
value: 48.7481
- type: nauc_map_at_10_max
value: 49.0016
- type: nauc_map_at_10_std
value: 14.336099999999998
- type: nauc_map_at_10_diff1
value: 48.1301
- type: nauc_map_at_20_max
value: 48.9681
- type: nauc_map_at_20_std
value: 14.5174
- type: nauc_map_at_20_diff1
value: 48.0536
- type: nauc_map_at_100_max
value: 48.997299999999996
- type: nauc_map_at_100_std
value: 14.6347
- type: nauc_map_at_100_diff1
value: 48.0899
- type: nauc_map_at_1000_max
value: 49.0003
- type: nauc_map_at_1000_std
value: 14.6138
- type: nauc_map_at_1000_diff1
value: 48.1148
- type: nauc_recall_at_1_max
value: 48.3592
- type: nauc_recall_at_1_std
value: 11.2728
- type: nauc_recall_at_1_diff1
value: 54.098
- type: nauc_recall_at_3_max
value: 49.490899999999996
- type: nauc_recall_at_3_std
value: 15.5245
- type: nauc_recall_at_3_diff1
value: 44.4469
- type: nauc_recall_at_5_max
value: 49.53
- type: nauc_recall_at_5_std
value: 18.0626
- type: nauc_recall_at_5_diff1
value: 41.3084
- type: nauc_recall_at_10_max
value: 48.734899999999996
- type: nauc_recall_at_10_std
value: 20.7001
- type: nauc_recall_at_10_diff1
value: 36.5179
- type: nauc_recall_at_20_max
value: 48.6031
- type: nauc_recall_at_20_std
value: 24.435100000000002
- type: nauc_recall_at_20_diff1
value: 34.7265
- type: nauc_recall_at_100_max
value: 49.8486
- type: nauc_recall_at_100_std
value: 35.1908
- type: nauc_recall_at_100_diff1
value: 32.034400000000005
- type: nauc_recall_at_1000_max
value: 55.304500000000004
- type: nauc_recall_at_1000_std
value: 47.902
- type: nauc_recall_at_1000_diff1
value: 31.4755
- type: nauc_precision_at_1_max
value: 48.4014
- type: nauc_precision_at_1_std
value: 11.304
- type: nauc_precision_at_1_diff1
value: 54.139199999999995
- type: nauc_precision_at_3_max
value: 49.533899999999996
- type: nauc_precision_at_3_std
value: 15.553700000000001
- type: nauc_precision_at_3_diff1
value: 44.4901
- type: nauc_precision_at_5_max
value: 49.5772
- type: nauc_precision_at_5_std
value: 18.0933
- type: nauc_precision_at_5_diff1
value: 41.3553
- type: nauc_precision_at_10_max
value: 48.787000000000006
- type: nauc_precision_at_10_std
value: 20.7335
- type: nauc_precision_at_10_diff1
value: 36.5688
- type: nauc_precision_at_20_max
value: 48.6597
- type: nauc_precision_at_20_std
value: 24.4717
- type: nauc_precision_at_20_diff1
value: 34.781600000000005
- type: nauc_precision_at_100_max
value: 49.9243
- type: nauc_precision_at_100_std
value: 35.3133
- type: nauc_precision_at_100_diff1
value: 32.0868
- type: nauc_precision_at_1000_max
value: 55.517300000000006
- type: nauc_precision_at_1000_std
value: 48.249900000000004
- type: nauc_precision_at_1000_diff1
value: 31.736399999999996
- type: nauc_mrr_at_1_max
value: 48.4014
- type: nauc_mrr_at_1_std
value: 11.304
- type: nauc_mrr_at_1_diff1
value: 54.139199999999995
- type: nauc_mrr_at_3_max
value: 49.102000000000004
- type: nauc_mrr_at_3_std
value: 13.394
- type: nauc_mrr_at_3_diff1
value: 49.5138
- type: nauc_mrr_at_5_max
value: 49.1397
- type: nauc_mrr_at_5_std
value: 14.0043
- type: nauc_mrr_at_5_diff1
value: 48.7883
- type: nauc_mrr_at_10_max
value: 49.0419
- type: nauc_mrr_at_10_std
value: 14.3656
- type: nauc_mrr_at_10_diff1
value: 48.1706
- type: nauc_mrr_at_20_max
value: 49.0087
- type: nauc_mrr_at_20_std
value: 14.546999999999999
- type: nauc_mrr_at_20_diff1
value: 48.094300000000004
- type: nauc_mrr_at_100_max
value: 49.038
- type: nauc_mrr_at_100_std
value: 14.6651
- type: nauc_mrr_at_100_diff1
value: 48.1306
- type: nauc_mrr_at_1000_max
value: 49.0404
- type: nauc_mrr_at_1000_std
value: 14.6437
- type: nauc_mrr_at_1000_diff1
value: 48.1549
- type: main_score
value: 38.4
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 33.367000000000004
- type: ndcg_at_3
value: 42.068
- type: ndcg_at_5
value: 44.79
- type: ndcg_at_10
value: 47.372
- type: ndcg_at_20
value: 49.409
- type: ndcg_at_100
value: 52.25
- type: ndcg_at_1000
value: 53.857
- type: map_at_1
value: 33.367000000000004
- type: map_at_3
value: 39.922000000000004
- type: map_at_5
value: 41.429
- type: map_at_10
value: 42.504999999999995
- type: map_at_20
value: 43.073
- type: map_at_100
value: 43.475
- type: map_at_1000
value: 43.533
- type: recall_at_1
value: 33.367000000000004
- type: recall_at_3
value: 48.281
- type: recall_at_5
value: 54.903999999999996
- type: recall_at_10
value: 62.841
- type: recall_at_20
value: 70.829
- type: recall_at_100
value: 85.996
- type: recall_at_1000
value: 98.787
- type: precision_at_1
value: 33.367000000000004
- type: precision_at_3
value: 16.094
- type: precision_at_5
value: 10.981
- type: precision_at_10
value: 6.283999999999999
- type: precision_at_20
value: 3.5409999999999995
- type: precision_at_100
value: 0.86
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 33.367000000000004
- type: mrr_at_3
value: 39.9225
- type: mrr_at_5
value: 41.429100000000005
- type: mrr_at_10
value: 42.5047
- type: mrr_at_20
value: 43.0729
- type: mrr_at_100
value: 43.475
- type: mrr_at_1000
value: 43.5325
- type: nauc_ndcg_at_1_max
value: 49.215599999999995
- type: nauc_ndcg_at_1_std
value: 7.7847
- type: nauc_ndcg_at_1_diff1
value: 53.823600000000006
- type: nauc_ndcg_at_3_max
value: 51.518299999999996
- type: nauc_ndcg_at_3_std
value: 13.1004
- type: nauc_ndcg_at_3_diff1
value: 46.4315
- type: nauc_ndcg_at_5_max
value: 51.4275
- type: nauc_ndcg_at_5_std
value: 13.7658
- type: nauc_ndcg_at_5_diff1
value: 45.703700000000005
- type: nauc_ndcg_at_10_max
value: 51.417500000000004
- type: nauc_ndcg_at_10_std
value: 14.5612
- type: nauc_ndcg_at_10_diff1
value: 45.1057
- type: nauc_ndcg_at_20_max
value: 51.67850000000001
- type: nauc_ndcg_at_20_std
value: 15.228
- type: nauc_ndcg_at_20_diff1
value: 45.2585
- type: nauc_ndcg_at_100_max
value: 51.68509999999999
- type: nauc_ndcg_at_100_std
value: 15.265400000000001
- type: nauc_ndcg_at_100_diff1
value: 46.299600000000005
- type: nauc_ndcg_at_1000_max
value: 51.453199999999995
- type: nauc_ndcg_at_1000_std
value: 14.1539
- type: nauc_ndcg_at_1000_diff1
value: 46.7368
- type: nauc_map_at_1_max
value: 49.215599999999995
- type: nauc_map_at_1_std
value: 7.7847
- type: nauc_map_at_1_diff1
value: 53.823600000000006
- type: nauc_map_at_3_max
value: 51.047
- type: nauc_map_at_3_std
value: 11.772499999999999
- type: nauc_map_at_3_diff1
value: 48.3261
- type: nauc_map_at_5_max
value: 51.0005
- type: nauc_map_at_5_std
value: 12.1281
- type: nauc_map_at_5_diff1
value: 47.9407
- type: nauc_map_at_10_max
value: 50.968
- type: nauc_map_at_10_std
value: 12.4076
- type: nauc_map_at_10_diff1
value: 47.7427
- type: nauc_map_at_20_max
value: 51.0379
- type: nauc_map_at_20_std
value: 12.5755
- type: nauc_map_at_20_diff1
value: 47.824
- type: nauc_map_at_100_max
value: 51.045399999999994
- type: nauc_map_at_100_std
value: 12.5665
- type: nauc_map_at_100_diff1
value: 47.9852
- type: nauc_map_at_1000_max
value: 51.0328
- type: nauc_map_at_1000_std
value: 12.5251
- type: nauc_map_at_1000_diff1
value: 47.9978
- type: nauc_recall_at_1_max
value: 49.215599999999995
- type: nauc_recall_at_1_std
value: 7.7847
- type: nauc_recall_at_1_diff1
value: 53.823600000000006
- type: nauc_recall_at_3_max
value: 52.8468
- type: nauc_recall_at_3_std
value: 16.9595
- type: nauc_recall_at_3_diff1
value: 40.906
- type: nauc_recall_at_5_max
value: 52.6566
- type: nauc_recall_at_5_std
value: 18.8317
- type: nauc_recall_at_5_diff1
value: 38.7903
- type: nauc_recall_at_10_max
value: 52.9016
- type: nauc_recall_at_10_std
value: 22.2713
- type: nauc_recall_at_10_diff1
value: 35.8589
- type: nauc_recall_at_20_max
value: 54.415400000000005
- type: nauc_recall_at_20_std
value: 26.8639
- type: nauc_recall_at_20_diff1
value: 34.7889
- type: nauc_recall_at_100_max
value: 56.409200000000006
- type: nauc_recall_at_100_std
value: 37.061699999999995
- type: nauc_recall_at_100_diff1
value: 37.7855
- type: nauc_recall_at_1000_max
value: 66.6721
- type: nauc_recall_at_1000_std
value: 52.0995
- type: nauc_recall_at_1000_diff1
value: 38.8158
- type: nauc_precision_at_1_max
value: 49.215599999999995
- type: nauc_precision_at_1_std
value: 7.7847
- type: nauc_precision_at_1_diff1
value: 53.823600000000006
- type: nauc_precision_at_3_max
value: 52.8468
- type: nauc_precision_at_3_std
value: 16.9595
- type: nauc_precision_at_3_diff1
value: 40.906
- type: nauc_precision_at_5_max
value: 52.6566
- type: nauc_precision_at_5_std
value: 18.8317
- type: nauc_precision_at_5_diff1
value: 38.7903
- type: nauc_precision_at_10_max
value: 52.9016
- type: nauc_precision_at_10_std
value: 22.2713
- type: nauc_precision_at_10_diff1
value: 35.8589
- type: nauc_precision_at_20_max
value: 54.415400000000005
- type: nauc_precision_at_20_std
value: 26.8639
- type: nauc_precision_at_20_diff1
value: 34.7889
- type: nauc_precision_at_100_max
value: 56.409200000000006
- type: nauc_precision_at_100_std
value: 37.061699999999995
- type: nauc_precision_at_100_diff1
value: 37.7855
- type: nauc_precision_at_1000_max
value: 66.6721
- type: nauc_precision_at_1000_std
value: 52.0995
- type: nauc_precision_at_1000_diff1
value: 38.8158
- type: nauc_mrr_at_1_max
value: 49.215599999999995
- type: nauc_mrr_at_1_std
value: 7.7847
- type: nauc_mrr_at_1_diff1
value: 53.823600000000006
- type: nauc_mrr_at_3_max
value: 51.047
- type: nauc_mrr_at_3_std
value: 11.772499999999999
- type: nauc_mrr_at_3_diff1
value: 48.3261
- type: nauc_mrr_at_5_max
value: 51.0005
- type: nauc_mrr_at_5_std
value: 12.1281
- type: nauc_mrr_at_5_diff1
value: 47.9407
- type: nauc_mrr_at_10_max
value: 50.968
- type: nauc_mrr_at_10_std
value: 12.4076
- type: nauc_mrr_at_10_diff1
value: 47.7427
- type: nauc_mrr_at_20_max
value: 51.0379
- type: nauc_mrr_at_20_std
value: 12.5755
- type: nauc_mrr_at_20_diff1
value: 47.824
- type: nauc_mrr_at_100_max
value: 51.045399999999994
- type: nauc_mrr_at_100_std
value: 12.5665
- type: nauc_mrr_at_100_diff1
value: 47.9852
- type: nauc_mrr_at_1000_max
value: 51.0328
- type: nauc_mrr_at_1000_std
value: 12.5251
- type: nauc_mrr_at_1000_diff1
value: 47.9978
- type: main_score
value: 47.372
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 30.639
- type: ndcg_at_3
value: 39.347
- type: ndcg_at_5
value: 42.077
- type: ndcg_at_10
value: 44.619
- type: ndcg_at_20
value: 46.698
- type: ndcg_at_100
value: 49.834
- type: ndcg_at_1000
value: 51.556999999999995
- type: map_at_1
value: 30.639
- type: map_at_3
value: 37.22
- type: map_at_5
value: 38.727000000000004
- type: map_at_10
value: 39.786
- type: map_at_20
value: 40.354
- type: map_at_100
value: 40.776
- type: map_at_1000
value: 40.841
- type: recall_at_1
value: 30.639
- type: recall_at_3
value: 45.494
- type: recall_at_5
value: 52.157
- type: recall_at_10
value: 59.967000000000006
- type: recall_at_20
value: 68.214
- type: recall_at_100
value: 85.309
- type: recall_at_1000
value: 98.908
- type: precision_at_1
value: 30.639
- type: precision_at_3
value: 15.165000000000001
- type: precision_at_5
value: 10.431
- type: precision_at_10
value: 5.997
- type: precision_at_20
value: 3.411
- type: precision_at_100
value: 0.853
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 30.639
- type: mrr_at_3
value: 37.2201
- type: mrr_at_5
value: 38.7275
- type: mrr_at_10
value: 39.7862
- type: mrr_at_20
value: 40.3536
- type: mrr_at_100
value: 40.7763
- type: mrr_at_1000
value: 40.8406
- type: nauc_ndcg_at_1_max
value: 47.3997
- type: nauc_ndcg_at_1_std
value: 4.6415
- type: nauc_ndcg_at_1_diff1
value: 55.2295
- type: nauc_ndcg_at_3_max
value: 51.1166
- type: nauc_ndcg_at_3_std
value: 8.8196
- type: nauc_ndcg_at_3_diff1
value: 49.119
- type: nauc_ndcg_at_5_max
value: 50.242200000000004
- type: nauc_ndcg_at_5_std
value: 8.5755
- type: nauc_ndcg_at_5_diff1
value: 47.6155
- type: nauc_ndcg_at_10_max
value: 50.213499999999996
- type: nauc_ndcg_at_10_std
value: 9.2496
- type: nauc_ndcg_at_10_diff1
value: 47.3074
- type: nauc_ndcg_at_20_max
value: 50.43299999999999
- type: nauc_ndcg_at_20_std
value: 9.2624
- type: nauc_ndcg_at_20_diff1
value: 47.249
- type: nauc_ndcg_at_100_max
value: 50.8598
- type: nauc_ndcg_at_100_std
value: 10.513300000000001
- type: nauc_ndcg_at_100_diff1
value: 47.928599999999996
- type: nauc_ndcg_at_1000_max
value: 50.3282
- type: nauc_ndcg_at_1000_std
value: 9.3475
- type: nauc_ndcg_at_1000_diff1
value: 48.4022
- type: nauc_map_at_1_max
value: 47.3997
- type: nauc_map_at_1_std
value: 4.6415
- type: nauc_map_at_1_diff1
value: 55.2295
- type: nauc_map_at_3_max
value: 50.33879999999999
- type: nauc_map_at_3_std
value: 8.0053
- type: nauc_map_at_3_diff1
value: 50.4792
- type: nauc_map_at_5_max
value: 49.7955
- type: nauc_map_at_5_std
value: 7.7969
- type: nauc_map_at_5_diff1
value: 49.6566
- type: nauc_map_at_10_max
value: 49.7532
- type: nauc_map_at_10_std
value: 8.032300000000001
- type: nauc_map_at_10_diff1
value: 49.548500000000004
- type: nauc_map_at_20_max
value: 49.8138
- type: nauc_map_at_20_std
value: 8.0091
- type: nauc_map_at_20_diff1
value: 49.5634
- type: nauc_map_at_100_max
value: 49.8475
- type: nauc_map_at_100_std
value: 8.132399999999999
- type: nauc_map_at_100_diff1
value: 49.6456
- type: nauc_map_at_1000_max
value: 49.830600000000004
- type: nauc_map_at_1000_std
value: 8.0998
- type: nauc_map_at_1000_diff1
value: 49.6603
- type: nauc_recall_at_1_max
value: 47.3997
- type: nauc_recall_at_1_std
value: 4.6415
- type: nauc_recall_at_1_diff1
value: 55.2295
- type: nauc_recall_at_3_max
value: 53.295899999999996
- type: nauc_recall_at_3_std
value: 11.0735
- type: nauc_recall_at_3_diff1
value: 45.2698
- type: nauc_recall_at_5_max
value: 51.4516
- type: nauc_recall_at_5_std
value: 10.8415
- type: nauc_recall_at_5_diff1
value: 41.4249
- type: nauc_recall_at_10_max
value: 51.6187
- type: nauc_recall_at_10_std
value: 13.4603
- type: nauc_recall_at_10_diff1
value: 39.8822
- type: nauc_recall_at_20_max
value: 52.849500000000006
- type: nauc_recall_at_20_std
value: 14.3943
- type: nauc_recall_at_20_diff1
value: 38.2481
- type: nauc_recall_at_100_max
value: 60.366699999999994
- type: nauc_recall_at_100_std
value: 34.2108
- type: nauc_recall_at_100_diff1
value: 38.5689
- type: nauc_recall_at_1000_max
value: 59.54429999999999
- type: nauc_recall_at_1000_std
value: 57.35059999999999
- type: nauc_recall_at_1000_diff1
value: 30.331999999999997
- type: nauc_precision_at_1_max
value: 47.3997
- type: nauc_precision_at_1_std
value: 4.6415
- type: nauc_precision_at_1_diff1
value: 55.2295
- type: nauc_precision_at_3_max
value: 53.295899999999996
- type: nauc_precision_at_3_std
value: 11.0735
- type: nauc_precision_at_3_diff1
value: 45.2698
- type: nauc_precision_at_5_max
value: 51.4516
- type: nauc_precision_at_5_std
value: 10.8415
- type: nauc_precision_at_5_diff1
value: 41.4249
- type: nauc_precision_at_10_max
value: 51.6187
- type: nauc_precision_at_10_std
value: 13.4603
- type: nauc_precision_at_10_diff1
value: 39.8822
- type: nauc_precision_at_20_max
value: 52.849500000000006
- type: nauc_precision_at_20_std
value: 14.3943
- type: nauc_precision_at_20_diff1
value: 38.2481
- type: nauc_precision_at_100_max
value: 60.366699999999994
- type: nauc_precision_at_100_std
value: 34.2108
- type: nauc_precision_at_100_diff1
value: 38.5689
- type: nauc_precision_at_1000_max
value: 59.54429999999999
- type: nauc_precision_at_1000_std
value: 57.35059999999999
- type: nauc_precision_at_1000_diff1
value: 30.331999999999997
- type: nauc_mrr_at_1_max
value: 47.3997
- type: nauc_mrr_at_1_std
value: 4.6415
- type: nauc_mrr_at_1_diff1
value: 55.2295
- type: nauc_mrr_at_3_max
value: 50.33879999999999
- type: nauc_mrr_at_3_std
value: 8.0053
- type: nauc_mrr_at_3_diff1
value: 50.4792
- type: nauc_mrr_at_5_max
value: 49.7955
- type: nauc_mrr_at_5_std
value: 7.7969
- type: nauc_mrr_at_5_diff1
value: 49.6566
- type: nauc_mrr_at_10_max
value: 49.7532
- type: nauc_mrr_at_10_std
value: 8.032300000000001
- type: nauc_mrr_at_10_diff1
value: 49.548500000000004
- type: nauc_mrr_at_20_max
value: 49.8138
- type: nauc_mrr_at_20_std
value: 8.0091
- type: nauc_mrr_at_20_diff1
value: 49.5634
- type: nauc_mrr_at_100_max
value: 49.8475
- type: nauc_mrr_at_100_std
value: 8.132399999999999
- type: nauc_mrr_at_100_diff1
value: 49.6456
- type: nauc_mrr_at_1000_max
value: 49.830600000000004
- type: nauc_mrr_at_1000_std
value: 8.0998
- type: nauc_mrr_at_1000_diff1
value: 49.6603
- type: main_score
value: 44.619
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 27.699
- type: ndcg_at_3
value: 35.978
- type: ndcg_at_5
value: 38.494
- type: ndcg_at_10
value: 41.17
- type: ndcg_at_20
value: 43.34
- type: ndcg_at_100
value: 46.44
- type: ndcg_at_1000
value: 48.534
- type: map_at_1
value: 27.699
- type: map_at_3
value: 33.928000000000004
- type: map_at_5
value: 35.325
- type: map_at_10
value: 36.433
- type: map_at_20
value: 37.033
- type: map_at_100
value: 37.46
- type: map_at_1000
value: 37.536
- type: recall_at_1
value: 27.699
- type: recall_at_3
value: 41.915
- type: recall_at_5
value: 48.021
- type: recall_at_10
value: 56.277
- type: recall_at_20
value: 64.827
- type: recall_at_100
value: 81.583
- type: recall_at_1000
value: 98.241
- type: precision_at_1
value: 27.699
- type: precision_at_3
value: 13.972000000000001
- type: precision_at_5
value: 9.604
- type: precision_at_10
value: 5.628
- type: precision_at_20
value: 3.241
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.098
- type: mrr_at_1
value: 27.699099999999998
- type: mrr_at_3
value: 33.9277
- type: mrr_at_5
value: 35.3249
- type: mrr_at_10
value: 36.433
- type: mrr_at_20
value: 37.033
- type: mrr_at_100
value: 37.460300000000004
- type: mrr_at_1000
value: 37.5364
- type: nauc_ndcg_at_1_max
value: 47.9902
- type: nauc_ndcg_at_1_std
value: 11.7877
- type: nauc_ndcg_at_1_diff1
value: 53.30009999999999
- type: nauc_ndcg_at_3_max
value: 48.7976
- type: nauc_ndcg_at_3_std
value: 14.285700000000002
- type: nauc_ndcg_at_3_diff1
value: 44.9715
- type: nauc_ndcg_at_5_max
value: 48.1773
- type: nauc_ndcg_at_5_std
value: 15.2027
- type: nauc_ndcg_at_5_diff1
value: 42.6697
- type: nauc_ndcg_at_10_max
value: 47.9669
- type: nauc_ndcg_at_10_std
value: 16.245
- type: nauc_ndcg_at_10_diff1
value: 41.7466
- type: nauc_ndcg_at_20_max
value: 47.5711
- type: nauc_ndcg_at_20_std
value: 16.6753
- type: nauc_ndcg_at_20_diff1
value: 41.3274
- type: nauc_ndcg_at_100_max
value: 48.157
- type: nauc_ndcg_at_100_std
value: 17.7415
- type: nauc_ndcg_at_100_diff1
value: 41.8455
- type: nauc_ndcg_at_1000_max
value: 48.0416
- type: nauc_ndcg_at_1000_std
value: 16.4432
- type: nauc_ndcg_at_1000_diff1
value: 42.96
- type: nauc_map_at_1_max
value: 47.9902
- type: nauc_map_at_1_std
value: 11.7877
- type: nauc_map_at_1_diff1
value: 53.30009999999999
- type: nauc_map_at_3_max
value: 48.605399999999996
- type: nauc_map_at_3_std
value: 13.7193
- type: nauc_map_at_3_diff1
value: 46.8232
- type: nauc_map_at_5_max
value: 48.2739
- type: nauc_map_at_5_std
value: 14.2215
- type: nauc_map_at_5_diff1
value: 45.5511
- type: nauc_map_at_10_max
value: 48.2171
- type: nauc_map_at_10_std
value: 14.6616
- type: nauc_map_at_10_diff1
value: 45.204699999999995
- type: nauc_map_at_20_max
value: 48.086600000000004
- type: nauc_map_at_20_std
value: 14.745700000000001
- type: nauc_map_at_20_diff1
value: 45.112
- type: nauc_map_at_100_max
value: 48.1655
- type: nauc_map_at_100_std
value: 14.8883
- type: nauc_map_at_100_diff1
value: 45.1828
- type: nauc_map_at_1000_max
value: 48.1632
- type: nauc_map_at_1000_std
value: 14.8524
- type: nauc_map_at_1000_diff1
value: 45.2272
- type: nauc_recall_at_1_max
value: 47.9902
- type: nauc_recall_at_1_std
value: 11.7877
- type: nauc_recall_at_1_diff1
value: 53.30009999999999
- type: nauc_recall_at_3_max
value: 49.332
- type: nauc_recall_at_3_std
value: 15.8498
- type: nauc_recall_at_3_diff1
value: 39.8739
- type: nauc_recall_at_5_max
value: 47.7993
- type: nauc_recall_at_5_std
value: 18.0993
- type: nauc_recall_at_5_diff1
value: 34.257
- type: nauc_recall_at_10_max
value: 46.940599999999996
- type: nauc_recall_at_10_std
value: 21.529
- type: nauc_recall_at_10_diff1
value: 30.6398
- type: nauc_recall_at_20_max
value: 45.2487
- type: nauc_recall_at_20_std
value: 24.376900000000003
- type: nauc_recall_at_20_diff1
value: 27.269199999999998
- type: nauc_recall_at_100_max
value: 49.290800000000004
- type: nauc_recall_at_100_std
value: 38.9228
- type: nauc_recall_at_100_diff1
value: 23.7152
- type: nauc_recall_at_1000_max
value: 43.8731
- type: nauc_recall_at_1000_std
value: 45.7342
- type: nauc_recall_at_1000_diff1
value: 7.1701
- type: nauc_precision_at_1_max
value: 47.9902
- type: nauc_precision_at_1_std
value: 11.7877
- type: nauc_precision_at_1_diff1
value: 53.30009999999999
- type: nauc_precision_at_3_max
value: 49.332
- type: nauc_precision_at_3_std
value: 15.8498
- type: nauc_precision_at_3_diff1
value: 39.8739
- type: nauc_precision_at_5_max
value: 47.7993
- type: nauc_precision_at_5_std
value: 18.0993
- type: nauc_precision_at_5_diff1
value: 34.257
- type: nauc_precision_at_10_max
value: 46.940599999999996
- type: nauc_precision_at_10_std
value: 21.529
- type: nauc_precision_at_10_diff1
value: 30.6398
- type: nauc_precision_at_20_max
value: 45.2487
- type: nauc_precision_at_20_std
value: 24.376900000000003
- type: nauc_precision_at_20_diff1
value: 27.269199999999998
- type: nauc_precision_at_100_max
value: 49.290800000000004
- type: nauc_precision_at_100_std
value: 38.9228
- type: nauc_precision_at_100_diff1
value: 23.7152
- type: nauc_precision_at_1000_max
value: 43.8731
- type: nauc_precision_at_1000_std
value: 45.7342
- type: nauc_precision_at_1000_diff1
value: 7.1701
- type: nauc_mrr_at_1_max
value: 47.9902
- type: nauc_mrr_at_1_std
value: 11.7877
- type: nauc_mrr_at_1_diff1
value: 53.30009999999999
- type: nauc_mrr_at_3_max
value: 48.605399999999996
- type: nauc_mrr_at_3_std
value: 13.7193
- type: nauc_mrr_at_3_diff1
value: 46.8232
- type: nauc_mrr_at_5_max
value: 48.2739
- type: nauc_mrr_at_5_std
value: 14.2215
- type: nauc_mrr_at_5_diff1
value: 45.5511
- type: nauc_mrr_at_10_max
value: 48.2171
- type: nauc_mrr_at_10_std
value: 14.6616
- type: nauc_mrr_at_10_diff1
value: 45.204699999999995
- type: nauc_mrr_at_20_max
value: 48.086600000000004
- type: nauc_mrr_at_20_std
value: 14.745700000000001
- type: nauc_mrr_at_20_diff1
value: 45.112
- type: nauc_mrr_at_100_max
value: 48.1655
- type: nauc_mrr_at_100_std
value: 14.8883
- type: nauc_mrr_at_100_diff1
value: 45.1828
- type: nauc_mrr_at_1000_max
value: 48.1632
- type: nauc_mrr_at_1000_std
value: 14.8524
- type: nauc_mrr_at_1000_diff1
value: 45.2272
- type: main_score
value: 41.17
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 30.455
- type: ndcg_at_3
value: 38.614
- type: ndcg_at_5
value: 40.693
- type: ndcg_at_10
value: 43.523
- type: ndcg_at_20
value: 45.651
- type: ndcg_at_100
value: 48.756
- type: ndcg_at_1000
value: 50.637
- type: map_at_1
value: 30.455
- type: map_at_3
value: 36.620999999999995
- type: map_at_5
value: 37.78
- type: map_at_10
value: 38.951
- type: map_at_20
value: 39.543
- type: map_at_100
value: 39.956
- type: map_at_1000
value: 40.022000000000006
- type: recall_at_1
value: 30.455
- type: recall_at_3
value: 44.375
- type: recall_at_5
value: 49.397999999999996
- type: recall_at_10
value: 58.13700000000001
- type: recall_at_20
value: 66.484
- type: recall_at_100
value: 83.438
- type: recall_at_1000
value: 98.482
- type: precision_at_1
value: 30.455
- type: precision_at_3
value: 14.792
- type: precision_at_5
value: 9.879999999999999
- type: precision_at_10
value: 5.814
- type: precision_at_20
value: 3.325
- type: precision_at_100
value: 0.835
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 30.4553
- type: mrr_at_3
value: 36.6213
- type: mrr_at_5
value: 37.7804
- type: mrr_at_10
value: 38.9508
- type: mrr_at_20
value: 39.5449
- type: mrr_at_100
value: 39.9577
- type: mrr_at_1000
value: 40.0238
- type: nauc_ndcg_at_1_max
value: 48.8898
- type: nauc_ndcg_at_1_std
value: 9.9853
- type: nauc_ndcg_at_1_diff1
value: 55.1585
- type: nauc_ndcg_at_3_max
value: 49.0008
- type: nauc_ndcg_at_3_std
value: 11.089599999999999
- type: nauc_ndcg_at_3_diff1
value: 47.700900000000004
- type: nauc_ndcg_at_5_max
value: 49.5803
- type: nauc_ndcg_at_5_std
value: 12.378599999999999
- type: nauc_ndcg_at_5_diff1
value: 46.9606
- type: nauc_ndcg_at_10_max
value: 49.1348
- type: nauc_ndcg_at_10_std
value: 12.696399999999999
- type: nauc_ndcg_at_10_diff1
value: 45.731
- type: nauc_ndcg_at_20_max
value: 49.6612
- type: nauc_ndcg_at_20_std
value: 14.3148
- type: nauc_ndcg_at_20_diff1
value: 44.9405
- type: nauc_ndcg_at_100_max
value: 49.8074
- type: nauc_ndcg_at_100_std
value: 15.1201
- type: nauc_ndcg_at_100_diff1
value: 45.420899999999996
- type: nauc_ndcg_at_1000_max
value: 49.5773
- type: nauc_ndcg_at_1000_std
value: 13.7904
- type: nauc_ndcg_at_1000_diff1
value: 46.5471
- type: nauc_map_at_1_max
value: 48.8898
- type: nauc_map_at_1_std
value: 9.9853
- type: nauc_map_at_1_diff1
value: 55.1585
- type: nauc_map_at_3_max
value: 48.9727
- type: nauc_map_at_3_std
value: 10.807500000000001
- type: nauc_map_at_3_diff1
value: 49.3725
- type: nauc_map_at_5_max
value: 49.2652
- type: nauc_map_at_5_std
value: 11.5037
- type: nauc_map_at_5_diff1
value: 48.9742
- type: nauc_map_at_10_max
value: 49.0863
- type: nauc_map_at_10_std
value: 11.6191
- type: nauc_map_at_10_diff1
value: 48.4889
- type: nauc_map_at_20_max
value: 49.2315
- type: nauc_map_at_20_std
value: 12.0546
- type: nauc_map_at_20_diff1
value: 48.3074
- type: nauc_map_at_100_max
value: 49.2415
- type: nauc_map_at_100_std
value: 12.133099999999999
- type: nauc_map_at_100_diff1
value: 48.398799999999994
- type: nauc_map_at_1000_max
value: 49.2308
- type: nauc_map_at_1000_std
value: 12.0927
- type: nauc_map_at_1000_diff1
value: 48.4355
- type: nauc_recall_at_1_max
value: 48.8898
- type: nauc_recall_at_1_std
value: 9.9853
- type: nauc_recall_at_1_diff1
value: 55.1585
- type: nauc_recall_at_3_max
value: 49.0815
- type: nauc_recall_at_3_std
value: 11.9015
- type: nauc_recall_at_3_diff1
value: 42.9785
- type: nauc_recall_at_5_max
value: 50.611399999999996
- type: nauc_recall_at_5_std
value: 15.122399999999999
- type: nauc_recall_at_5_diff1
value: 41.073
- type: nauc_recall_at_10_max
value: 49.2098
- type: nauc_recall_at_10_std
value: 16.4463
- type: nauc_recall_at_10_diff1
value: 36.525
- type: nauc_recall_at_20_max
value: 51.6409
- type: nauc_recall_at_20_std
value: 24.4586
- type: nauc_recall_at_20_diff1
value: 31.394899999999996
- type: nauc_recall_at_100_max
value: 54.785399999999996
- type: nauc_recall_at_100_std
value: 40.8177
- type: nauc_recall_at_100_diff1
value: 25.7955
- type: nauc_recall_at_1000_max
value: 70.33070000000001
- type: nauc_recall_at_1000_std
value: 71.0309
- type: nauc_recall_at_1000_diff1
value: 17.0748
- type: nauc_precision_at_1_max
value: 48.8898
- type: nauc_precision_at_1_std
value: 9.9853
- type: nauc_precision_at_1_diff1
value: 55.1585
- type: nauc_precision_at_3_max
value: 49.0815
- type: nauc_precision_at_3_std
value: 11.9015
- type: nauc_precision_at_3_diff1
value: 42.9785
- type: nauc_precision_at_5_max
value: 50.611399999999996
- type: nauc_precision_at_5_std
value: 15.122399999999999
- type: nauc_precision_at_5_diff1
value: 41.073
- type: nauc_precision_at_10_max
value: 49.2098
- type: nauc_precision_at_10_std
value: 16.4463
- type: nauc_precision_at_10_diff1
value: 36.525
- type: nauc_precision_at_20_max
value: 51.6
- type: nauc_precision_at_20_std
value: 24.4193
- type: nauc_precision_at_20_diff1
value: 31.3295
- type: nauc_precision_at_100_max
value: 54.744400000000006
- type: nauc_precision_at_100_std
value: 40.7844
- type: nauc_precision_at_100_diff1
value: 25.687900000000003
- type: nauc_precision_at_1000_max
value: 63.998200000000004
- type: nauc_precision_at_1000_std
value: 65.2054
- type: nauc_precision_at_1000_diff1
value: 13.280100000000001
- type: nauc_mrr_at_1_max
value: 48.8898
- type: nauc_mrr_at_1_std
value: 9.9853
- type: nauc_mrr_at_1_diff1
value: 55.1585
- type: nauc_mrr_at_3_max
value: 48.9727
- type: nauc_mrr_at_3_std
value: 10.807500000000001
- type: nauc_mrr_at_3_diff1
value: 49.3725
- type: nauc_mrr_at_5_max
value: 49.2652
- type: nauc_mrr_at_5_std
value: 11.5037
- type: nauc_mrr_at_5_diff1
value: 48.9742
- type: nauc_mrr_at_10_max
value: 49.0863
- type: nauc_mrr_at_10_std
value: 11.6191
- type: nauc_mrr_at_10_diff1
value: 48.4889
- type: nauc_mrr_at_20_max
value: 49.229299999999995
- type: nauc_mrr_at_20_std
value: 12.0523
- type: nauc_mrr_at_20_diff1
value: 48.3045
- type: nauc_mrr_at_100_max
value: 49.2394
- type: nauc_mrr_at_100_std
value: 12.1308
- type: nauc_mrr_at_100_diff1
value: 48.396
- type: nauc_mrr_at_1000_max
value: 49.228699999999996
- type: nauc_mrr_at_1000_std
value: 12.090399999999999
- type: nauc_mrr_at_1000_diff1
value: 48.4328
- type: main_score
value: 43.523
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (ar)
type: jinaai/mintakaqa
config: ar
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: ndcg_at_1
value: 8.761
- type: ndcg_at_3
value: 12.867
- type: ndcg_at_5
value: 14.322
- type: ndcg_at_10
value: 16.1
- type: ndcg_at_20
value: 17.693
- type: ndcg_at_100
value: 20.48
- type: ndcg_at_1000
value: 25.629999999999995
- type: map_at_1
value: 8.761
- type: map_at_3
value: 11.855
- type: map_at_5
value: 12.661
- type: map_at_10
value: 13.395999999999999
- type: map_at_20
value: 13.838000000000001
- type: map_at_100
value: 14.202
- type: map_at_1000
value: 14.344999999999999
- type: recall_at_1
value: 8.761
- type: recall_at_3
value: 15.797
- type: recall_at_5
value: 19.337
- type: recall_at_10
value: 24.83
- type: recall_at_20
value: 31.094
- type: recall_at_100
value: 46.437
- type: recall_at_1000
value: 90.059
- type: precision_at_1
value: 8.761
- type: precision_at_3
value: 5.266
- type: precision_at_5
value: 3.8670000000000004
- type: precision_at_10
value: 2.483
- type: precision_at_20
value: 1.555
- type: precision_at_100
value: 0.464
- type: precision_at_1000
value: 0.09
- type: mrr_at_1
value: 8.7608
- type: mrr_at_3
value: 11.855
- type: mrr_at_5
value: 12.6608
- type: mrr_at_10
value: 13.3959
- type: mrr_at_20
value: 13.838000000000001
- type: mrr_at_100
value: 14.2024
- type: mrr_at_1000
value: 14.345099999999999
- type: nauc_ndcg_at_1_max
value: 21.6864
- type: nauc_ndcg_at_1_std
value: 28.610200000000003
- type: nauc_ndcg_at_1_diff1
value: 20.9846
- type: nauc_ndcg_at_3_max
value: 20.477400000000003
- type: nauc_ndcg_at_3_std
value: 27.073999999999998
- type: nauc_ndcg_at_3_diff1
value: 12.8415
- type: nauc_ndcg_at_5_max
value: 19.3812
- type: nauc_ndcg_at_5_std
value: 25.2471
- type: nauc_ndcg_at_5_diff1
value: 11.6586
- type: nauc_ndcg_at_10_max
value: 19.3229
- type: nauc_ndcg_at_10_std
value: 25.6876
- type: nauc_ndcg_at_10_diff1
value: 10.7103
- type: nauc_ndcg_at_20_max
value: 18.872
- type: nauc_ndcg_at_20_std
value: 25.363000000000003
- type: nauc_ndcg_at_20_diff1
value: 9.721499999999999
- type: nauc_ndcg_at_100_max
value: 18.7914
- type: nauc_ndcg_at_100_std
value: 24.9771
- type: nauc_ndcg_at_100_diff1
value: 9.564300000000001
- type: nauc_ndcg_at_1000_max
value: 19.5652
- type: nauc_ndcg_at_1000_std
value: 24.713099999999997
- type: nauc_ndcg_at_1000_diff1
value: 10.9607
- type: nauc_map_at_1_max
value: 21.6864
- type: nauc_map_at_1_std
value: 28.610200000000003
- type: nauc_map_at_1_diff1
value: 20.9846
- type: nauc_map_at_3_max
value: 20.8068
- type: nauc_map_at_3_std
value: 27.277
- type: nauc_map_at_3_diff1
value: 14.511299999999999
- type: nauc_map_at_5_max
value: 20.0835
- type: nauc_map_at_5_std
value: 26.131300000000003
- type: nauc_map_at_5_diff1
value: 13.6857
- type: nauc_map_at_10_max
value: 20.0281
- type: nauc_map_at_10_std
value: 26.2996
- type: nauc_map_at_10_diff1
value: 13.192300000000001
- type: nauc_map_at_20_max
value: 19.8456
- type: nauc_map_at_20_std
value: 26.1681
- type: nauc_map_at_20_diff1
value: 12.8234
- type: nauc_map_at_100_max
value: 19.7798
- type: nauc_map_at_100_std
value: 26.096999999999998
- type: nauc_map_at_100_diff1
value: 12.7576
- type: nauc_map_at_1000_max
value: 19.804
- type: nauc_map_at_1000_std
value: 26.0808
- type: nauc_map_at_1000_diff1
value: 12.8081
- type: nauc_recall_at_1_max
value: 21.6864
- type: nauc_recall_at_1_std
value: 28.610200000000003
- type: nauc_recall_at_1_diff1
value: 20.9846
- type: nauc_recall_at_3_max
value: 19.6883
- type: nauc_recall_at_3_std
value: 26.6378
- type: nauc_recall_at_3_diff1
value: 8.9681
- type: nauc_recall_at_5_max
value: 17.8277
- type: nauc_recall_at_5_std
value: 23.2801
- type: nauc_recall_at_5_diff1
value: 7.352200000000001
- type: nauc_recall_at_10_max
value: 17.9106
- type: nauc_recall_at_10_std
value: 24.556
- type: nauc_recall_at_10_diff1
value: 5.6874
- type: nauc_recall_at_20_max
value: 16.950699999999998
- type: nauc_recall_at_20_std
value: 23.874000000000002
- type: nauc_recall_at_20_diff1
value: 3.562
- type: nauc_recall_at_100_max
value: 17.147000000000002
- type: nauc_recall_at_100_std
value: 22.5333
- type: nauc_recall_at_100_diff1
value: 3.4271999999999996
- type: nauc_recall_at_1000_max
value: 27.553499999999996
- type: nauc_recall_at_1000_std
value: 13.8395
- type: nauc_recall_at_1000_diff1
value: 12.9968
- type: nauc_precision_at_1_max
value: 21.6864
- type: nauc_precision_at_1_std
value: 28.610200000000003
- type: nauc_precision_at_1_diff1
value: 20.9846
- type: nauc_precision_at_3_max
value: 19.6883
- type: nauc_precision_at_3_std
value: 26.6378
- type: nauc_precision_at_3_diff1
value: 8.9681
- type: nauc_precision_at_5_max
value: 17.8277
- type: nauc_precision_at_5_std
value: 23.2801
- type: nauc_precision_at_5_diff1
value: 7.352200000000001
- type: nauc_precision_at_10_max
value: 17.9106
- type: nauc_precision_at_10_std
value: 24.556
- type: nauc_precision_at_10_diff1
value: 5.6874
- type: nauc_precision_at_20_max
value: 16.950699999999998
- type: nauc_precision_at_20_std
value: 23.874000000000002
- type: nauc_precision_at_20_diff1
value: 3.562
- type: nauc_precision_at_100_max
value: 17.147000000000002
- type: nauc_precision_at_100_std
value: 22.5333
- type: nauc_precision_at_100_diff1
value: 3.4271999999999996
- type: nauc_precision_at_1000_max
value: 27.553499999999996
- type: nauc_precision_at_1000_std
value: 13.8395
- type: nauc_precision_at_1000_diff1
value: 12.9968
- type: nauc_mrr_at_1_max
value: 21.6864
- type: nauc_mrr_at_1_std
value: 28.610200000000003
- type: nauc_mrr_at_1_diff1
value: 20.9846
- type: nauc_mrr_at_3_max
value: 20.8068
- type: nauc_mrr_at_3_std
value: 27.277
- type: nauc_mrr_at_3_diff1
value: 14.511299999999999
- type: nauc_mrr_at_5_max
value: 20.0835
- type: nauc_mrr_at_5_std
value: 26.131300000000003
- type: nauc_mrr_at_5_diff1
value: 13.6857
- type: nauc_mrr_at_10_max
value: 20.0281
- type: nauc_mrr_at_10_std
value: 26.2996
- type: nauc_mrr_at_10_diff1
value: 13.192300000000001
- type: nauc_mrr_at_20_max
value: 19.8456
- type: nauc_mrr_at_20_std
value: 26.1681
- type: nauc_mrr_at_20_diff1
value: 12.8234
- type: nauc_mrr_at_100_max
value: 19.7798
- type: nauc_mrr_at_100_std
value: 26.096999999999998
- type: nauc_mrr_at_100_diff1
value: 12.7576
- type: nauc_mrr_at_1000_max
value: 19.804
- type: nauc_mrr_at_1000_std
value: 26.0808
- type: nauc_mrr_at_1000_diff1
value: 12.8081
- type: main_score
value: 16.1
- task:
type: Retrieval
dataset:
name: MTEB MrTidyRetrieval (arabic)
type: mteb/mrtidy
config: arabic
split: test
revision: fc24a3ce8f09746410daee3d5cd823ff7a0675b7
metrics:
- type: ndcg_at_1
value: 14.338999999999999
- type: ndcg_at_3
value: 20.278
- type: ndcg_at_5
value: 23.035
- type: ndcg_at_10
value: 25.934
- type: ndcg_at_20
value: 27.68
- type: ndcg_at_100
value: 30.685000000000002
- type: ndcg_at_1000
value: 32.926
- type: map_at_1
value: 13.228000000000002
- type: map_at_3
value: 18.301000000000002
- type: map_at_5
value: 19.830000000000002
- type: map_at_10
value: 21.038
- type: map_at_20
value: 21.538
- type: map_at_100
value: 21.977
- type: map_at_1000
value: 22.066
- type: recall_at_1
value: 13.228000000000002
- type: recall_at_3
value: 24.792
- type: recall_at_5
value: 31.298
- type: recall_at_10
value: 39.948
- type: recall_at_20
value: 46.546
- type: recall_at_100
value: 61.949
- type: recall_at_1000
value: 79.001
- type: precision_at_1
value: 14.338999999999999
- type: precision_at_3
value: 9.035
- type: precision_at_5
value: 6.883
- type: precision_at_10
value: 4.44
- type: precision_at_20
value: 2.5989999999999998
- type: precision_at_100
value: 0.7080000000000001
- type: precision_at_1000
value: 0.091
- type: mrr_at_1
value: 14.338600000000001
- type: mrr_at_3
value: 19.5652
- type: mrr_at_5
value: 21.1517
- type: mrr_at_10
value: 22.3876
- type: mrr_at_20
value: 22.8831
- type: mrr_at_100
value: 23.2868
- type: mrr_at_1000
value: 23.359199999999998
- type: nauc_ndcg_at_1_max
value: 12.350800000000001
- type: nauc_ndcg_at_1_std
value: 10.1704
- type: nauc_ndcg_at_1_diff1
value: 19.557199999999998
- type: nauc_ndcg_at_3_max
value: 16.4692
- type: nauc_ndcg_at_3_std
value: 12.4419
- type: nauc_ndcg_at_3_diff1
value: 18.2343
- type: nauc_ndcg_at_5_max
value: 17.1079
- type: nauc_ndcg_at_5_std
value: 14.7839
- type: nauc_ndcg_at_5_diff1
value: 17.9067
- type: nauc_ndcg_at_10_max
value: 17.6668
- type: nauc_ndcg_at_10_std
value: 17.6519
- type: nauc_ndcg_at_10_diff1
value: 17.1885
- type: nauc_ndcg_at_20_max
value: 18.017
- type: nauc_ndcg_at_20_std
value: 19.1385
- type: nauc_ndcg_at_20_diff1
value: 16.5595
- type: nauc_ndcg_at_100_max
value: 17.7476
- type: nauc_ndcg_at_100_std
value: 20.1949
- type: nauc_ndcg_at_100_diff1
value: 16.3128
- type: nauc_ndcg_at_1000_max
value: 17.799799999999998
- type: nauc_ndcg_at_1000_std
value: 20.5006
- type: nauc_ndcg_at_1000_diff1
value: 16.4148
- type: nauc_map_at_1_max
value: 12.4058
- type: nauc_map_at_1_std
value: 11.1723
- type: nauc_map_at_1_diff1
value: 20.7625
- type: nauc_map_at_3_max
value: 15.609300000000001
- type: nauc_map_at_3_std
value: 12.2595
- type: nauc_map_at_3_diff1
value: 18.8335
- type: nauc_map_at_5_max
value: 16.1361
- type: nauc_map_at_5_std
value: 13.8137
- type: nauc_map_at_5_diff1
value: 18.712300000000003
- type: nauc_map_at_10_max
value: 16.4222
- type: nauc_map_at_10_std
value: 15.059600000000001
- type: nauc_map_at_10_diff1
value: 18.3989
- type: nauc_map_at_20_max
value: 16.563200000000002
- type: nauc_map_at_20_std
value: 15.549299999999999
- type: nauc_map_at_20_diff1
value: 18.205299999999998
- type: nauc_map_at_100_max
value: 16.498099999999997
- type: nauc_map_at_100_std
value: 15.735199999999999
- type: nauc_map_at_100_diff1
value: 18.098300000000002
- type: nauc_map_at_1000_max
value: 16.4922
- type: nauc_map_at_1000_std
value: 15.7561
- type: nauc_map_at_1000_diff1
value: 18.124100000000002
- type: nauc_recall_at_1_max
value: 12.4058
- type: nauc_recall_at_1_std
value: 11.1723
- type: nauc_recall_at_1_diff1
value: 20.7625
- type: nauc_recall_at_3_max
value: 18.3013
- type: nauc_recall_at_3_std
value: 12.954699999999999
- type: nauc_recall_at_3_diff1
value: 16.9722
- type: nauc_recall_at_5_max
value: 19.309
- type: nauc_recall_at_5_std
value: 17.3374
- type: nauc_recall_at_5_diff1
value: 16.314
- type: nauc_recall_at_10_max
value: 20.6932
- type: nauc_recall_at_10_std
value: 24.299799999999998
- type: nauc_recall_at_10_diff1
value: 14.666799999999999
- type: nauc_recall_at_20_max
value: 21.8139
- type: nauc_recall_at_20_std
value: 28.881400000000003
- type: nauc_recall_at_20_diff1
value: 12.928899999999999
- type: nauc_recall_at_100_max
value: 20.8015
- type: nauc_recall_at_100_std
value: 34.943999999999996
- type: nauc_recall_at_100_diff1
value: 11.6233
- type: nauc_recall_at_1000_max
value: 24.131800000000002
- type: nauc_recall_at_1000_std
value: 45.778200000000005
- type: nauc_recall_at_1000_diff1
value: 9.0989
- type: nauc_precision_at_1_max
value: 12.350800000000001
- type: nauc_precision_at_1_std
value: 10.1704
- type: nauc_precision_at_1_diff1
value: 19.557199999999998
- type: nauc_precision_at_3_max
value: 18.6388
- type: nauc_precision_at_3_std
value: 11.9733
- type: nauc_precision_at_3_diff1
value: 16.4002
- type: nauc_precision_at_5_max
value: 19.988400000000002
- type: nauc_precision_at_5_std
value: 17.020599999999998
- type: nauc_precision_at_5_diff1
value: 15.4553
- type: nauc_precision_at_10_max
value: 21.029
- type: nauc_precision_at_10_std
value: 24.0445
- type: nauc_precision_at_10_diff1
value: 12.7804
- type: nauc_precision_at_20_max
value: 20.8578
- type: nauc_precision_at_20_std
value: 27.8364
- type: nauc_precision_at_20_diff1
value: 10.0575
- type: nauc_precision_at_100_max
value: 19.115
- type: nauc_precision_at_100_std
value: 30.4435
- type: nauc_precision_at_100_diff1
value: 6.2284
- type: nauc_precision_at_1000_max
value: 14.213899999999999
- type: nauc_precision_at_1000_std
value: 27.5515
- type: nauc_precision_at_1000_diff1
value: 1.3398
- type: nauc_mrr_at_1_max
value: 12.350800000000001
- type: nauc_mrr_at_1_std
value: 10.1704
- type: nauc_mrr_at_1_diff1
value: 19.557199999999998
- type: nauc_mrr_at_3_max
value: 15.576799999999999
- type: nauc_mrr_at_3_std
value: 11.9021
- type: nauc_mrr_at_3_diff1
value: 18.185599999999997
- type: nauc_mrr_at_5_max
value: 15.615699999999999
- type: nauc_mrr_at_5_std
value: 12.9917
- type: nauc_mrr_at_5_diff1
value: 17.8173
- type: nauc_mrr_at_10_max
value: 15.7163
- type: nauc_mrr_at_10_std
value: 14.2755
- type: nauc_mrr_at_10_diff1
value: 17.4754
- type: nauc_mrr_at_20_max
value: 15.8022
- type: nauc_mrr_at_20_std
value: 14.69
- type: nauc_mrr_at_20_diff1
value: 17.201900000000002
- type: nauc_mrr_at_100_max
value: 15.767000000000001
- type: nauc_mrr_at_100_std
value: 14.8459
- type: nauc_mrr_at_100_diff1
value: 17.2406
- type: nauc_mrr_at_1000_max
value: 15.778400000000001
- type: nauc_mrr_at_1000_std
value: 14.8592
- type: nauc_mrr_at_1000_diff1
value: 17.2675
- type: main_score
value: 25.934
- task:
type: Retrieval
dataset:
name: MTEB SadeemQuestionRetrieval (default)
type: sadeem-ai/sadeem-ar-eval-retrieval-questions
config: default
split: test
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
metrics:
- type: ndcg_at_1
value: 25.945
- type: ndcg_at_3
value: 55.796
- type: ndcg_at_5
value: 57.726
- type: ndcg_at_10
value: 58.884
- type: ndcg_at_20
value: 59.705
- type: ndcg_at_100
value: 60.659
- type: ndcg_at_1000
value: 61.151999999999994
- type: map_at_1
value: 25.945
- type: map_at_3
value: 47.981
- type: map_at_5
value: 49.051
- type: map_at_10
value: 49.536
- type: map_at_20
value: 49.767
- type: map_at_100
value: 49.9
- type: map_at_1000
value: 49.916
- type: recall_at_1
value: 25.945
- type: recall_at_3
value: 78.602
- type: recall_at_5
value: 83.29299999999999
- type: recall_at_10
value: 86.836
- type: recall_at_20
value: 90.04299999999999
- type: recall_at_100
value: 95.165
- type: recall_at_1000
value: 99.138
- type: precision_at_1
value: 25.945
- type: precision_at_3
value: 26.201
- type: precision_at_5
value: 16.659
- type: precision_at_10
value: 8.684
- type: precision_at_20
value: 4.502
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 24.3179
- type: mrr_at_3
value: 46.8566
- type: mrr_at_5
value: 47.9288
- type: mrr_at_10
value: 48.4848
- type: mrr_at_20
value: 48.700700000000005
- type: mrr_at_100
value: 48.8358
- type: mrr_at_1000
value: 48.8521
- type: nauc_ndcg_at_1_max
value: 15.6065
- type: nauc_ndcg_at_1_std
value: 4.1895
- type: nauc_ndcg_at_1_diff1
value: -5.9052
- type: nauc_ndcg_at_3_max
value: 35.0009
- type: nauc_ndcg_at_3_std
value: 12.2065
- type: nauc_ndcg_at_3_diff1
value: -49.336600000000004
- type: nauc_ndcg_at_5_max
value: 33.3652
- type: nauc_ndcg_at_5_std
value: 12.2193
- type: nauc_ndcg_at_5_diff1
value: -43.4435
- type: nauc_ndcg_at_10_max
value: 31.9907
- type: nauc_ndcg_at_10_std
value: 12.9051
- type: nauc_ndcg_at_10_diff1
value: -41.2196
- type: nauc_ndcg_at_20_max
value: 30.653000000000002
- type: nauc_ndcg_at_20_std
value: 14.0403
- type: nauc_ndcg_at_20_diff1
value: -38.6306
- type: nauc_ndcg_at_100_max
value: 29.307499999999997
- type: nauc_ndcg_at_100_std
value: 12.8583
- type: nauc_ndcg_at_100_diff1
value: -35.8193
- type: nauc_ndcg_at_1000_max
value: 28.833399999999997
- type: nauc_ndcg_at_1000_std
value: 12.0671
- type: nauc_ndcg_at_1000_diff1
value: -34.3451
- type: nauc_map_at_1_max
value: 15.6065
- type: nauc_map_at_1_std
value: 4.1895
- type: nauc_map_at_1_diff1
value: -5.9052
- type: nauc_map_at_3_max
value: 28.6012
- type: nauc_map_at_3_std
value: 9.6436
- type: nauc_map_at_3_diff1
value: -34.6364
- type: nauc_map_at_5_max
value: 27.581699999999998
- type: nauc_map_at_5_std
value: 9.5477
- type: nauc_map_at_5_diff1
value: -31.2154
- type: nauc_map_at_10_max
value: 27.005699999999997
- type: nauc_map_at_10_std
value: 9.7735
- type: nauc_map_at_10_diff1
value: -30.2406
- type: nauc_map_at_20_max
value: 26.6504
- type: nauc_map_at_20_std
value: 10.044400000000001
- type: nauc_map_at_20_diff1
value: -29.523300000000003
- type: nauc_map_at_100_max
value: 26.4772
- type: nauc_map_at_100_std
value: 9.8956
- type: nauc_map_at_100_diff1
value: -29.164
- type: nauc_map_at_1000_max
value: 26.460800000000003
- type: nauc_map_at_1000_std
value: 9.8771
- type: nauc_map_at_1000_diff1
value: -29.119099999999996
- type: nauc_recall_at_1_max
value: 15.6065
- type: nauc_recall_at_1_std
value: 4.1895
- type: nauc_recall_at_1_diff1
value: -5.9052
- type: nauc_recall_at_3_max
value: 62.232200000000006
- type: nauc_recall_at_3_std
value: 23.0712
- type: nauc_recall_at_3_diff1
value: -112.0696
- type: nauc_recall_at_5_max
value: 62.732600000000005
- type: nauc_recall_at_5_std
value: 25.924500000000002
- type: nauc_recall_at_5_diff1
value: -105.32390000000001
- type: nauc_recall_at_10_max
value: 61.8591
- type: nauc_recall_at_10_std
value: 32.929700000000004
- type: nauc_recall_at_10_diff1
value: -107.3419
- type: nauc_recall_at_20_max
value: 58.1697
- type: nauc_recall_at_20_std
value: 48.2999
- type: nauc_recall_at_20_diff1
value: -102.9417
- type: nauc_recall_at_100_max
value: 54.3349
- type: nauc_recall_at_100_std
value: 55.2788
- type: nauc_recall_at_100_diff1
value: -101.90060000000001
- type: nauc_recall_at_1000_max
value: 77.6378
- type: nauc_recall_at_1000_std
value: 82.6629
- type: nauc_recall_at_1000_diff1
value: -109.45089999999999
- type: nauc_precision_at_1_max
value: 15.6065
- type: nauc_precision_at_1_std
value: 4.1895
- type: nauc_precision_at_1_diff1
value: -5.9052
- type: nauc_precision_at_3_max
value: 62.232200000000006
- type: nauc_precision_at_3_std
value: 23.0712
- type: nauc_precision_at_3_diff1
value: -112.0696
- type: nauc_precision_at_5_max
value: 62.732600000000005
- type: nauc_precision_at_5_std
value: 25.924500000000002
- type: nauc_precision_at_5_diff1
value: -105.32390000000001
- type: nauc_precision_at_10_max
value: 61.8591
- type: nauc_precision_at_10_std
value: 32.929700000000004
- type: nauc_precision_at_10_diff1
value: -107.3419
- type: nauc_precision_at_20_max
value: 58.1697
- type: nauc_precision_at_20_std
value: 48.2999
- type: nauc_precision_at_20_diff1
value: -102.9417
- type: nauc_precision_at_100_max
value: 54.3349
- type: nauc_precision_at_100_std
value: 55.2788
- type: nauc_precision_at_100_diff1
value: -101.90060000000001
- type: nauc_precision_at_1000_max
value: 77.6378
- type: nauc_precision_at_1000_std
value: 82.6629
- type: nauc_precision_at_1000_diff1
value: -109.45089999999999
- type: nauc_mrr_at_1_max
value: 15.4767
- type: nauc_mrr_at_1_std
value: 7.9148
- type: nauc_mrr_at_1_diff1
value: -28.0379
- type: nauc_mrr_at_3_max
value: 29.0395
- type: nauc_mrr_at_3_std
value: 13.347700000000001
- type: nauc_mrr_at_3_diff1
value: -51.603
- type: nauc_mrr_at_5_max
value: 27.9939
- type: nauc_mrr_at_5_std
value: 12.8712
- type: nauc_mrr_at_5_diff1
value: -48.4563
- type: nauc_mrr_at_10_max
value: 27.2858
- type: nauc_mrr_at_10_std
value: 13.2486
- type: nauc_mrr_at_10_diff1
value: -47.4786
- type: nauc_mrr_at_20_max
value: 26.9478
- type: nauc_mrr_at_20_std
value: 13.571
- type: nauc_mrr_at_20_diff1
value: -46.9807
- type: nauc_mrr_at_100_max
value: 26.7688
- type: nauc_mrr_at_100_std
value: 13.439200000000001
- type: nauc_mrr_at_100_diff1
value: -46.7007
- type: nauc_mrr_at_1000_max
value: 26.753
- type: nauc_mrr_at_1000_std
value: 13.4243
- type: nauc_mrr_at_1000_diff1
value: -46.6676
- type: main_score
value: 58.884
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-ara)
type: jinaai/xpqa
config: ara-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 25.467000000000002
- type: ndcg_at_3
value: 26.25
- type: ndcg_at_5
value: 27.809
- type: ndcg_at_10
value: 31.296000000000003
- type: ndcg_at_20
value: 34.087
- type: ndcg_at_100
value: 38.891999999999996
- type: ndcg_at_1000
value: 42.423
- type: map_at_1
value: 13.042000000000002
- type: map_at_3
value: 20.979999999999997
- type: map_at_5
value: 23.64
- type: map_at_10
value: 25.463
- type: map_at_20
value: 26.443
- type: map_at_100
value: 27.328000000000003
- type: map_at_1000
value: 27.492
- type: recall_at_1
value: 13.042000000000002
- type: recall_at_3
value: 25.271
- type: recall_at_5
value: 31.740000000000002
- type: recall_at_10
value: 40.613
- type: recall_at_20
value: 49.689
- type: recall_at_100
value: 71.569
- type: recall_at_1000
value: 96.387
- type: precision_at_1
value: 25.467000000000002
- type: precision_at_3
value: 18.178
- type: precision_at_5
value: 14.052999999999999
- type: precision_at_10
value: 8.973
- type: precision_at_20
value: 5.427
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.19499999999999998
- type: mrr_at_1
value: 25.466699999999996
- type: mrr_at_3
value: 30.177799999999998
- type: mrr_at_5
value: 31.477800000000002
- type: mrr_at_10
value: 32.626
- type: mrr_at_20
value: 33.2774
- type: mrr_at_100
value: 33.732800000000005
- type: mrr_at_1000
value: 33.8177
- type: nauc_ndcg_at_1_max
value: 22.4447
- type: nauc_ndcg_at_1_std
value: -12.8273
- type: nauc_ndcg_at_1_diff1
value: 30.6643
- type: nauc_ndcg_at_3_max
value: 21.8156
- type: nauc_ndcg_at_3_std
value: -7.678599999999999
- type: nauc_ndcg_at_3_diff1
value: 24.3589
- type: nauc_ndcg_at_5_max
value: 22.3372
- type: nauc_ndcg_at_5_std
value: -6.578
- type: nauc_ndcg_at_5_diff1
value: 24.3558
- type: nauc_ndcg_at_10_max
value: 24.249399999999998
- type: nauc_ndcg_at_10_std
value: -5.4608
- type: nauc_ndcg_at_10_diff1
value: 25.0826
- type: nauc_ndcg_at_20_max
value: 25.1081
- type: nauc_ndcg_at_20_std
value: -4.4616999999999996
- type: nauc_ndcg_at_20_diff1
value: 25.4926
- type: nauc_ndcg_at_100_max
value: 24.9943
- type: nauc_ndcg_at_100_std
value: -2.9071
- type: nauc_ndcg_at_100_diff1
value: 25.0587
- type: nauc_ndcg_at_1000_max
value: 24.9393
- type: nauc_ndcg_at_1000_std
value: -3.9886
- type: nauc_ndcg_at_1000_diff1
value: 24.9149
- type: nauc_map_at_1_max
value: 10.3874
- type: nauc_map_at_1_std
value: -14.1189
- type: nauc_map_at_1_diff1
value: 27.1204
- type: nauc_map_at_3_max
value: 19.1887
- type: nauc_map_at_3_std
value: -8.7622
- type: nauc_map_at_3_diff1
value: 23.968400000000003
- type: nauc_map_at_5_max
value: 22.1726
- type: nauc_map_at_5_std
value: -7.8292
- type: nauc_map_at_5_diff1
value: 24.8012
- type: nauc_map_at_10_max
value: 23.4288
- type: nauc_map_at_10_std
value: -7.4127
- type: nauc_map_at_10_diff1
value: 25.507800000000003
- type: nauc_map_at_20_max
value: 23.7292
- type: nauc_map_at_20_std
value: -7.187200000000001
- type: nauc_map_at_20_diff1
value: 25.7249
- type: nauc_map_at_100_max
value: 23.5909
- type: nauc_map_at_100_std
value: -6.9328
- type: nauc_map_at_100_diff1
value: 25.4793
- type: nauc_map_at_1000_max
value: 23.6015
- type: nauc_map_at_1000_std
value: -6.9618
- type: nauc_map_at_1000_diff1
value: 25.4933
- type: nauc_recall_at_1_max
value: 10.3874
- type: nauc_recall_at_1_std
value: -14.1189
- type: nauc_recall_at_1_diff1
value: 27.1204
- type: nauc_recall_at_3_max
value: 17.793400000000002
- type: nauc_recall_at_3_std
value: -3.7499
- type: nauc_recall_at_3_diff1
value: 17.6262
- type: nauc_recall_at_5_max
value: 21.038899999999998
- type: nauc_recall_at_5_std
value: -1.8713
- type: nauc_recall_at_5_diff1
value: 19.7434
- type: nauc_recall_at_10_max
value: 24.9692
- type: nauc_recall_at_10_std
value: 1.053
- type: nauc_recall_at_10_diff1
value: 21.2845
- type: nauc_recall_at_20_max
value: 27.9293
- type: nauc_recall_at_20_std
value: 4.7705
- type: nauc_recall_at_20_diff1
value: 22.1695
- type: nauc_recall_at_100_max
value: 29.4898
- type: nauc_recall_at_100_std
value: 16.903000000000002
- type: nauc_recall_at_100_diff1
value: 21.1503
- type: nauc_recall_at_1000_max
value: 61.8728
- type: nauc_recall_at_1000_std
value: 63.785599999999995
- type: nauc_recall_at_1000_diff1
value: 4.887
- type: nauc_precision_at_1_max
value: 22.4447
- type: nauc_precision_at_1_std
value: -12.8273
- type: nauc_precision_at_1_diff1
value: 30.6643
- type: nauc_precision_at_3_max
value: 27.930899999999998
- type: nauc_precision_at_3_std
value: -5.6785000000000005
- type: nauc_precision_at_3_diff1
value: 22.5772
- type: nauc_precision_at_5_max
value: 29.625200000000003
- type: nauc_precision_at_5_std
value: -3.949
- type: nauc_precision_at_5_diff1
value: 22.569200000000002
- type: nauc_precision_at_10_max
value: 30.353
- type: nauc_precision_at_10_std
value: -2.6828000000000003
- type: nauc_precision_at_10_diff1
value: 22.0195
- type: nauc_precision_at_20_max
value: 29.3013
- type: nauc_precision_at_20_std
value: -0.9629000000000001
- type: nauc_precision_at_20_diff1
value: 21.473100000000002
- type: nauc_precision_at_100_max
value: 24.3825
- type: nauc_precision_at_100_std
value: 2.3911000000000002
- type: nauc_precision_at_100_diff1
value: 15.606300000000001
- type: nauc_precision_at_1000_max
value: 18.7938
- type: nauc_precision_at_1000_std
value: -0.1033
- type: nauc_precision_at_1000_diff1
value: 9.300799999999999
- type: nauc_mrr_at_1_max
value: 22.4447
- type: nauc_mrr_at_1_std
value: -12.8273
- type: nauc_mrr_at_1_diff1
value: 30.6643
- type: nauc_mrr_at_3_max
value: 21.898300000000003
- type: nauc_mrr_at_3_std
value: -9.1679
- type: nauc_mrr_at_3_diff1
value: 26.647900000000003
- type: nauc_mrr_at_5_max
value: 21.7943
- type: nauc_mrr_at_5_std
value: -8.9716
- type: nauc_mrr_at_5_diff1
value: 26.8466
- type: nauc_mrr_at_10_max
value: 22.4361
- type: nauc_mrr_at_10_std
value: -8.288
- type: nauc_mrr_at_10_diff1
value: 26.8214
- type: nauc_mrr_at_20_max
value: 22.6388
- type: nauc_mrr_at_20_std
value: -7.9011
- type: nauc_mrr_at_20_diff1
value: 26.842899999999997
- type: nauc_mrr_at_100_max
value: 22.6039
- type: nauc_mrr_at_100_std
value: -7.7958
- type: nauc_mrr_at_100_diff1
value: 26.847199999999997
- type: nauc_mrr_at_1000_max
value: 22.5934
- type: nauc_mrr_at_1000_std
value: -7.8259
- type: nauc_mrr_at_1000_diff1
value: 26.8426
- type: main_score
value: 31.296000000000003
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-ara)
type: jinaai/xpqa
config: eng-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 13.467
- type: ndcg_at_3
value: 14.322
- type: ndcg_at_5
value: 15.528
- type: ndcg_at_10
value: 18.358
- type: ndcg_at_20
value: 20.73
- type: ndcg_at_100
value: 25.879
- type: ndcg_at_1000
value: 31.326999999999998
- type: map_at_1
value: 6.622
- type: map_at_3
value: 10.791
- type: map_at_5
value: 12.337
- type: map_at_10
value: 13.682
- type: map_at_20
value: 14.438999999999998
- type: map_at_100
value: 15.292
- type: map_at_1000
value: 15.545
- type: recall_at_1
value: 6.622
- type: recall_at_3
value: 13.862
- type: recall_at_5
value: 18.389
- type: recall_at_10
value: 25.578
- type: recall_at_20
value: 33.416000000000004
- type: recall_at_100
value: 56.938
- type: recall_at_1000
value: 93.982
- type: precision_at_1
value: 13.467
- type: precision_at_3
value: 10.133000000000001
- type: precision_at_5
value: 8.16
- type: precision_at_10
value: 5.627
- type: precision_at_20
value: 3.627
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.194
- type: mrr_at_1
value: 13.466700000000001
- type: mrr_at_3
value: 17.488899999999997
- type: mrr_at_5
value: 18.7222
- type: mrr_at_10
value: 19.905900000000003
- type: mrr_at_20
value: 20.4778
- type: mrr_at_100
value: 21.023
- type: mrr_at_1000
value: 21.1478
- type: nauc_ndcg_at_1_max
value: 21.769
- type: nauc_ndcg_at_1_std
value: 2.4559
- type: nauc_ndcg_at_1_diff1
value: 22.7686
- type: nauc_ndcg_at_3_max
value: 24.3857
- type: nauc_ndcg_at_3_std
value: 5.9556
- type: nauc_ndcg_at_3_diff1
value: 22.3492
- type: nauc_ndcg_at_5_max
value: 25.810100000000002
- type: nauc_ndcg_at_5_std
value: 6.325799999999999
- type: nauc_ndcg_at_5_diff1
value: 21.993
- type: nauc_ndcg_at_10_max
value: 26.6969
- type: nauc_ndcg_at_10_std
value: 7.2925
- type: nauc_ndcg_at_10_diff1
value: 21.3312
- type: nauc_ndcg_at_20_max
value: 26.652900000000002
- type: nauc_ndcg_at_20_std
value: 7.271
- type: nauc_ndcg_at_20_diff1
value: 21.4505
- type: nauc_ndcg_at_100_max
value: 27.418300000000002
- type: nauc_ndcg_at_100_std
value: 9.1853
- type: nauc_ndcg_at_100_diff1
value: 21.0781
- type: nauc_ndcg_at_1000_max
value: 26.5394
- type: nauc_ndcg_at_1000_std
value: 8.4966
- type: nauc_ndcg_at_1000_diff1
value: 20.2687
- type: nauc_map_at_1_max
value: 21.621499999999997
- type: nauc_map_at_1_std
value: 6.7188
- type: nauc_map_at_1_diff1
value: 28.6267
- type: nauc_map_at_3_max
value: 24.7587
- type: nauc_map_at_3_std
value: 7.5144
- type: nauc_map_at_3_diff1
value: 24.7211
- type: nauc_map_at_5_max
value: 26.5481
- type: nauc_map_at_5_std
value: 6.7313
- type: nauc_map_at_5_diff1
value: 24.5343
- type: nauc_map_at_10_max
value: 26.962199999999996
- type: nauc_map_at_10_std
value: 7.3188
- type: nauc_map_at_10_diff1
value: 23.6207
- type: nauc_map_at_20_max
value: 27.009
- type: nauc_map_at_20_std
value: 7.2947999999999995
- type: nauc_map_at_20_diff1
value: 23.4863
- type: nauc_map_at_100_max
value: 27.185399999999998
- type: nauc_map_at_100_std
value: 7.5737
- type: nauc_map_at_100_diff1
value: 23.543
- type: nauc_map_at_1000_max
value: 27.1341
- type: nauc_map_at_1000_std
value: 7.5804
- type: nauc_map_at_1000_diff1
value: 23.494999999999997
- type: nauc_recall_at_1_max
value: 21.621499999999997
- type: nauc_recall_at_1_std
value: 6.7188
- type: nauc_recall_at_1_diff1
value: 28.6267
- type: nauc_recall_at_3_max
value: 23.969099999999997
- type: nauc_recall_at_3_std
value: 8.4769
- type: nauc_recall_at_3_diff1
value: 20.115
- type: nauc_recall_at_5_max
value: 25.155499999999996
- type: nauc_recall_at_5_std
value: 6.4667
- type: nauc_recall_at_5_diff1
value: 18.6197
- type: nauc_recall_at_10_max
value: 26.3774
- type: nauc_recall_at_10_std
value: 8.262799999999999
- type: nauc_recall_at_10_diff1
value: 17.7344
- type: nauc_recall_at_20_max
value: 25.6955
- type: nauc_recall_at_20_std
value: 8.1547
- type: nauc_recall_at_20_diff1
value: 18.0549
- type: nauc_recall_at_100_max
value: 28.3794
- type: nauc_recall_at_100_std
value: 16.8501
- type: nauc_recall_at_100_diff1
value: 14.7472
- type: nauc_recall_at_1000_max
value: 35.3088
- type: nauc_recall_at_1000_std
value: 34.5591
- type: nauc_recall_at_1000_diff1
value: -14.508099999999999
- type: nauc_precision_at_1_max
value: 21.769
- type: nauc_precision_at_1_std
value: 2.4559
- type: nauc_precision_at_1_diff1
value: 22.7686
- type: nauc_precision_at_3_max
value: 25.005100000000002
- type: nauc_precision_at_3_std
value: 3.7567000000000004
- type: nauc_precision_at_3_diff1
value: 20.7241
- type: nauc_precision_at_5_max
value: 27.572200000000002
- type: nauc_precision_at_5_std
value: 3.6336
- type: nauc_precision_at_5_diff1
value: 19.896
- type: nauc_precision_at_10_max
value: 27.253800000000002
- type: nauc_precision_at_10_std
value: 4.561599999999999
- type: nauc_precision_at_10_diff1
value: 16.7525
- type: nauc_precision_at_20_max
value: 25.235400000000002
- type: nauc_precision_at_20_std
value: 3.9741
- type: nauc_precision_at_20_diff1
value: 15.7945
- type: nauc_precision_at_100_max
value: 20.383100000000002
- type: nauc_precision_at_100_std
value: 4.2147
- type: nauc_precision_at_100_diff1
value: 13.3018
- type: nauc_precision_at_1000_max
value: 6.3098
- type: nauc_precision_at_1000_std
value: -1.7795999999999998
- type: nauc_precision_at_1000_diff1
value: 3.7354
- type: nauc_mrr_at_1_max
value: 21.769
- type: nauc_mrr_at_1_std
value: 2.4559
- type: nauc_mrr_at_1_diff1
value: 22.7686
- type: nauc_mrr_at_3_max
value: 22.3842
- type: nauc_mrr_at_3_std
value: 4.4822
- type: nauc_mrr_at_3_diff1
value: 19.708000000000002
- type: nauc_mrr_at_5_max
value: 22.7469
- type: nauc_mrr_at_5_std
value: 4.8326
- type: nauc_mrr_at_5_diff1
value: 19.5886
- type: nauc_mrr_at_10_max
value: 23.2992
- type: nauc_mrr_at_10_std
value: 5.2336
- type: nauc_mrr_at_10_diff1
value: 19.7147
- type: nauc_mrr_at_20_max
value: 23.244699999999998
- type: nauc_mrr_at_20_std
value: 5.2174
- type: nauc_mrr_at_20_diff1
value: 19.808600000000002
- type: nauc_mrr_at_100_max
value: 23.3962
- type: nauc_mrr_at_100_std
value: 5.4528
- type: nauc_mrr_at_100_diff1
value: 19.799
- type: nauc_mrr_at_1000_max
value: 23.386699999999998
- type: nauc_mrr_at_1000_std
value: 5.432
- type: nauc_mrr_at_1000_diff1
value: 19.7846
- type: main_score
value: 18.358
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-eng)
type: jinaai/xpqa
config: ara-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 19.811
- type: ndcg_at_3
value: 21.506
- type: ndcg_at_5
value: 23.173
- type: ndcg_at_10
value: 26.913999999999998
- type: ndcg_at_20
value: 29.970000000000002
- type: ndcg_at_100
value: 35.274
- type: ndcg_at_1000
value: 39.164
- type: map_at_1
value: 11.013
- type: map_at_3
value: 17.051
- type: map_at_5
value: 19.209
- type: map_at_10
value: 21.105
- type: map_at_20
value: 22.189
- type: map_at_100
value: 23.143
- type: map_at_1000
value: 23.34
- type: recall_at_1
value: 11.013
- type: recall_at_3
value: 21.139
- type: recall_at_5
value: 27.136
- type: recall_at_10
value: 36.649
- type: recall_at_20
value: 46.752
- type: recall_at_100
value: 70.786
- type: recall_at_1000
value: 97.04899999999999
- type: precision_at_1
value: 19.811
- type: precision_at_3
value: 14.96
- type: precision_at_5
value: 11.725
- type: precision_at_10
value: 7.857
- type: precision_at_20
value: 4.939
- type: precision_at_100
value: 1.472
- type: precision_at_1000
value: 0.198
- type: mrr_at_1
value: 19.811300000000003
- type: mrr_at_3
value: 24.8428
- type: mrr_at_5
value: 26.2916
- type: mrr_at_10
value: 27.699
- type: mrr_at_20
value: 28.3441
- type: mrr_at_100
value: 28.8789
- type: mrr_at_1000
value: 28.968
- type: nauc_ndcg_at_1_max
value: 13.658600000000002
- type: nauc_ndcg_at_1_std
value: -10.888399999999999
- type: nauc_ndcg_at_1_diff1
value: 28.503
- type: nauc_ndcg_at_3_max
value: 13.2295
- type: nauc_ndcg_at_3_std
value: -8.3667
- type: nauc_ndcg_at_3_diff1
value: 24.2478
- type: nauc_ndcg_at_5_max
value: 16.2788
- type: nauc_ndcg_at_5_std
value: -6.1103
- type: nauc_ndcg_at_5_diff1
value: 23.8149
- type: nauc_ndcg_at_10_max
value: 17.7924
- type: nauc_ndcg_at_10_std
value: -5.2757
- type: nauc_ndcg_at_10_diff1
value: 22.7064
- type: nauc_ndcg_at_20_max
value: 19.031000000000002
- type: nauc_ndcg_at_20_std
value: -4.5977
- type: nauc_ndcg_at_20_diff1
value: 22.2638
- type: nauc_ndcg_at_100_max
value: 19.7211
- type: nauc_ndcg_at_100_std
value: -2.3255000000000003
- type: nauc_ndcg_at_100_diff1
value: 21.990299999999998
- type: nauc_ndcg_at_1000_max
value: 18.959799999999998
- type: nauc_ndcg_at_1000_std
value: -3.1267000000000005
- type: nauc_ndcg_at_1000_diff1
value: 22.975
- type: nauc_map_at_1_max
value: 4.2032
- type: nauc_map_at_1_std
value: -10.4419
- type: nauc_map_at_1_diff1
value: 27.2957
- type: nauc_map_at_3_max
value: 12.0436
- type: nauc_map_at_3_std
value: -8.5909
- type: nauc_map_at_3_diff1
value: 25.1571
- type: nauc_map_at_5_max
value: 15.2261
- type: nauc_map_at_5_std
value: -7.7981
- type: nauc_map_at_5_diff1
value: 24.9448
- type: nauc_map_at_10_max
value: 15.9522
- type: nauc_map_at_10_std
value: -7.366300000000001
- type: nauc_map_at_10_diff1
value: 24.191
- type: nauc_map_at_20_max
value: 16.4523
- type: nauc_map_at_20_std
value: -7.115
- type: nauc_map_at_20_diff1
value: 23.9544
- type: nauc_map_at_100_max
value: 16.615199999999998
- type: nauc_map_at_100_std
value: -6.7194
- type: nauc_map_at_100_diff1
value: 24.024
- type: nauc_map_at_1000_max
value: 16.598
- type: nauc_map_at_1000_std
value: -6.6981
- type: nauc_map_at_1000_diff1
value: 24.077399999999997
- type: nauc_recall_at_1_max
value: 4.2032
- type: nauc_recall_at_1_std
value: -10.4419
- type: nauc_recall_at_1_diff1
value: 27.2957
- type: nauc_recall_at_3_max
value: 12.0031
- type: nauc_recall_at_3_std
value: -5.558
- type: nauc_recall_at_3_diff1
value: 21.6049
- type: nauc_recall_at_5_max
value: 18.288899999999998
- type: nauc_recall_at_5_std
value: -1.9322
- type: nauc_recall_at_5_diff1
value: 20.0738
- type: nauc_recall_at_10_max
value: 20.4263
- type: nauc_recall_at_10_std
value: -0.4483
- type: nauc_recall_at_10_diff1
value: 16.9348
- type: nauc_recall_at_20_max
value: 23.555400000000002
- type: nauc_recall_at_20_std
value: 1.7368999999999999
- type: nauc_recall_at_20_diff1
value: 15.4241
- type: nauc_recall_at_100_max
value: 28.749599999999997
- type: nauc_recall_at_100_std
value: 15.001999999999999
- type: nauc_recall_at_100_diff1
value: 10.1602
- type: nauc_recall_at_1000_max
value: 52.9767
- type: nauc_recall_at_1000_std
value: 63.133300000000006
- type: nauc_recall_at_1000_diff1
value: -8.1688
- type: nauc_precision_at_1_max
value: 13.658600000000002
- type: nauc_precision_at_1_std
value: -10.888399999999999
- type: nauc_precision_at_1_diff1
value: 28.503
- type: nauc_precision_at_3_max
value: 18.2643
- type: nauc_precision_at_3_std
value: -7.6172
- type: nauc_precision_at_3_diff1
value: 20.1407
- type: nauc_precision_at_5_max
value: 23.6899
- type: nauc_precision_at_5_std
value: -5.0431
- type: nauc_precision_at_5_diff1
value: 19.3496
- type: nauc_precision_at_10_max
value: 23.7744
- type: nauc_precision_at_10_std
value: -2.9978000000000002
- type: nauc_precision_at_10_diff1
value: 15.9886
- type: nauc_precision_at_20_max
value: 23.9516
- type: nauc_precision_at_20_std
value: -1.881
- type: nauc_precision_at_20_diff1
value: 13.858
- type: nauc_precision_at_100_max
value: 22.0491
- type: nauc_precision_at_100_std
value: 3.9923
- type: nauc_precision_at_100_diff1
value: 10.8588
- type: nauc_precision_at_1000_max
value: 15.2248
- type: nauc_precision_at_1000_std
value: 2.2651
- type: nauc_precision_at_1000_diff1
value: 8.451500000000001
- type: nauc_mrr_at_1_max
value: 13.658600000000002
- type: nauc_mrr_at_1_std
value: -10.888399999999999
- type: nauc_mrr_at_1_diff1
value: 28.503
- type: nauc_mrr_at_3_max
value: 12.0131
- type: nauc_mrr_at_3_std
value: -9.0483
- type: nauc_mrr_at_3_diff1
value: 25.1263
- type: nauc_mrr_at_5_max
value: 14.2408
- type: nauc_mrr_at_5_std
value: -7.324400000000001
- type: nauc_mrr_at_5_diff1
value: 24.4894
- type: nauc_mrr_at_10_max
value: 15.1286
- type: nauc_mrr_at_10_std
value: -6.958
- type: nauc_mrr_at_10_diff1
value: 24.5045
- type: nauc_mrr_at_20_max
value: 15.3281
- type: nauc_mrr_at_20_std
value: -6.8811
- type: nauc_mrr_at_20_diff1
value: 24.4511
- type: nauc_mrr_at_100_max
value: 15.237700000000002
- type: nauc_mrr_at_100_std
value: -6.6511000000000005
- type: nauc_mrr_at_100_diff1
value: 24.4441
- type: nauc_mrr_at_1000_max
value: 15.2116
- type: nauc_mrr_at_1000_std
value: -6.6709000000000005
- type: nauc_mrr_at_1000_diff1
value: 24.4846
- type: main_score
value: 26.913999999999998
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 81.20578037912223
- type: cosine_spearman
value: 77.43670420687278
- type: euclidean_pearson
value: 74.60444698819703
- type: euclidean_spearman
value: 72.25767053642666
- type: main_score
value: 77.43670420687278
- type: manhattan_pearson
value: 73.86951335383257
- type: manhattan_spearman
value: 71.41608509527123
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 83.11155556919923
- type: cosine_spearman
value: 79.39435627520159
- type: euclidean_pearson
value: 81.05225024180342
- type: euclidean_spearman
value: 79.09926890001618
- type: main_score
value: 79.39435627520159
- type: manhattan_pearson
value: 80.74351302609706
- type: manhattan_spearman
value: 78.826254748334
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 85.10074960888633
- type: cosine_spearman
value: 78.93043293576132
- type: euclidean_pearson
value: 84.1168219787408
- type: euclidean_spearman
value: 78.44739559202252
- type: main_score
value: 78.93043293576132
- type: manhattan_pearson
value: 83.79447841594396
- type: manhattan_spearman
value: 77.94028171700384
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 81.34459901517775
- type: cosine_spearman
value: 82.73032633919925
- type: euclidean_pearson
value: 82.83546499367434
- type: euclidean_spearman
value: 83.29701673615389
- type: main_score
value: 82.73032633919925
- type: manhattan_pearson
value: 82.63480502797324
- type: manhattan_spearman
value: 83.05016589615636
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 82.53179983763488
- type: cosine_spearman
value: 81.64974497557361
- type: euclidean_pearson
value: 83.03981070806898
- type: euclidean_spearman
value: 82.65556168300631
- type: main_score
value: 81.64974497557361
- type: manhattan_pearson
value: 82.83722360191446
- type: manhattan_spearman
value: 82.4164264119
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 86.5684162475647
- type: cosine_spearman
value: 87.62163215009723
- type: euclidean_pearson
value: 87.3068288651339
- type: euclidean_spearman
value: 88.03508640722863
- type: main_score
value: 87.62163215009723
- type: manhattan_pearson
value: 87.21818681800193
- type: manhattan_spearman
value: 87.94690511382603
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 81.70518105237446
- type: cosine_spearman
value: 83.66083698795428
- type: euclidean_pearson
value: 82.80400684544435
- type: euclidean_spearman
value: 83.39926895275799
- type: main_score
value: 83.66083698795428
- type: manhattan_pearson
value: 82.44430538731845
- type: manhattan_spearman
value: 82.99600783826028
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 82.23229967696153
- type: cosine_spearman
value: 82.40039006538706
- type: euclidean_pearson
value: 79.21322872573518
- type: euclidean_spearman
value: 79.14230529579783
- type: main_score
value: 82.40039006538706
- type: manhattan_pearson
value: 79.1476348987964
- type: manhattan_spearman
value: 78.82381660638143
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 45.95767124518871
- type: cosine_spearman
value: 51.37922888872568
- type: euclidean_pearson
value: 45.519471121310126
- type: euclidean_spearman
value: 51.45605803385654
- type: main_score
value: 51.37922888872568
- type: manhattan_pearson
value: 45.98761117909666
- type: manhattan_spearman
value: 51.48451973989366
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 85.38916827757183
- type: cosine_spearman
value: 86.16303183485594
- type: euclidean_pearson
value: 85.16406897245115
- type: euclidean_spearman
value: 85.40364087457081
- type: main_score
value: 86.16303183485594
- type: manhattan_pearson
value: 84.96853193915084
- type: manhattan_spearman
value: 85.13238442843544
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.077426987171158
- type: cosine_spearman
value: 30.163682020271608
- type: dot_pearson
value: 27.31125295906803
- type: dot_spearman
value: 29.138235153208193
- type: main_score
value: 30.163682020271608
- type: pearson
value: 30.077426987171158
- type: spearman
value: 30.163682020271608
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.8538831619509135
name: Pearson Cosine
- type: spearman_cosine
value: 0.861625750018802
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8496745674597512
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8513333417508545
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8516261261374778
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8540549341060195
name: Spearman Euclidean
- type: pearson_dot
value: 0.7281308266536204
name: Pearson Dot
- type: spearman_dot
value: 0.7230282720855726
name: Spearman Dot
- type: pearson_max
value: 0.8538831619509135
name: Pearson Max
- type: spearman_max
value: 0.861625750018802
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.8542379189261009
name: Pearson Cosine
- type: spearman_cosine
value: 0.8609329396560859
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8486657899695456
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8512120732504748
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8505249483849495
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8538738365440234
name: Spearman Euclidean
- type: pearson_dot
value: 0.7075618032859148
name: Pearson Dot
- type: spearman_dot
value: 0.7028728329509918
name: Spearman Dot
- type: pearson_max
value: 0.8542379189261009
name: Pearson Max
- type: spearman_max
value: 0.8609329396560859
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.8486308733045101
name: Pearson Cosine
- type: spearman_cosine
value: 0.8578681811996274
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8404506123980291
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.845565163232125
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8414758099131773
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8471566121478254
name: Spearman Euclidean
- type: pearson_dot
value: 0.6668664182302968
name: Pearson Dot
- type: spearman_dot
value: 0.6651222481800894
name: Spearman Dot
- type: pearson_max
value: 0.8486308733045101
name: Pearson Max
- type: spearman_max
value: 0.8578681811996274
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.8389761445410956
name: Pearson Cosine
- type: spearman_cosine
value: 0.8499312736457453
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8287388421834582
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8353046807483782
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8297699263897746
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8371843253238523
name: Spearman Euclidean
- type: pearson_dot
value: 0.5855876200722326
name: Pearson Dot
- type: spearman_dot
value: 0.5834920267418124
name: Spearman Dot
- type: pearson_max
value: 0.8389761445410956
name: Pearson Max
- type: spearman_max
value: 0.8499312736457453
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.8290685425698586
name: Pearson Cosine
- type: spearman_cosine
value: 0.8429054799136109
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8100968316314205
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8221121550434057
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8129044863346081
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8255133471709527
name: Spearman Euclidean
- type: pearson_dot
value: 0.5067257944655903
name: Pearson Dot
- type: spearman_dot
value: 0.5109761436588146
name: Spearman Dot
- type: pearson_max
value: 0.8290685425698586
name: Pearson Max
- type: spearman_max
value: 0.8429054799136109
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 79f2382ceacceacdf38563d7c5d16b9ff8d725d6 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Nli-Matryoshka")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8539 |
| **spearman_cosine** | **0.8616** |
| pearson_manhattan | 0.8497 |
| spearman_manhattan | 0.8513 |
| pearson_euclidean | 0.8516 |
| spearman_euclidean | 0.8541 |
| pearson_dot | 0.7281 |
| spearman_dot | 0.723 |
| pearson_max | 0.8539 |
| spearman_max | 0.8616 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8542 |
| **spearman_cosine** | **0.8609** |
| pearson_manhattan | 0.8487 |
| spearman_manhattan | 0.8512 |
| pearson_euclidean | 0.8505 |
| spearman_euclidean | 0.8539 |
| pearson_dot | 0.7076 |
| spearman_dot | 0.7029 |
| pearson_max | 0.8542 |
| spearman_max | 0.8609 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8486 |
| **spearman_cosine** | **0.8579** |
| pearson_manhattan | 0.8405 |
| spearman_manhattan | 0.8456 |
| pearson_euclidean | 0.8415 |
| spearman_euclidean | 0.8472 |
| pearson_dot | 0.6669 |
| spearman_dot | 0.6651 |
| pearson_max | 0.8486 |
| spearman_max | 0.8579 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.839 |
| **spearman_cosine** | **0.8499** |
| pearson_manhattan | 0.8287 |
| spearman_manhattan | 0.8353 |
| pearson_euclidean | 0.8298 |
| spearman_euclidean | 0.8372 |
| pearson_dot | 0.5856 |
| spearman_dot | 0.5835 |
| pearson_max | 0.839 |
| spearman_max | 0.8499 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8291 |
| **spearman_cosine** | **0.8429** |
| pearson_manhattan | 0.8101 |
| spearman_manhattan | 0.8221 |
| pearson_euclidean | 0.8129 |
| spearman_euclidean | 0.8255 |
| pearson_dot | 0.5067 |
| spearman_dot | 0.511 |
| pearson_max | 0.8291 |
| spearman_max | 0.8429 |
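The five evaluation sets above correspond to the Matryoshka dimensions the model was trained with. As a minimal sketch (the dimension of 256 and the example sentences are illustrative), a smaller embedding size can be selected at load time via `truncate_dim`:

```python
from sentence_transformers import SentenceTransformer

# Load the model truncated to one of the Matryoshka dimensions, e.g. 256
model_256 = SentenceTransformer(
    "Omartificial-Intelligence-Space/Arabic-Nli-Matryoshka",
    truncate_dim=256,
)

embeddings = model_256.encode([
    "امرأتان يتعانقان بينما يحملان حزمة",
    "إمرأتان يحملان حزمة",
])
print(embeddings.shape)  # (2, 256)
```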
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.33 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.21 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 15.32 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
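  For reference, a minimal sketch of how this loss configuration could be constructed with the Sentence Transformers API (variable names are illustrative, not taken from the original training script):

  ```python
  from sentence_transformers import SentenceTransformer, losses

  model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

  # MatryoshkaLoss wraps MultipleNegativesRankingLoss over all listed dimensions
  base_loss = losses.MultipleNegativesRankingLoss(model)
  train_loss = losses.MatryoshkaLoss(
      model,
      base_loss,
      matryoshka_dims=[768, 512, 256, 128, 64],
      matryoshka_weights=[1, 1, 1, 1, 1],
      n_dims_per_step=-1,
  )
  ```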
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.86 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.22 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.2 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
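As a rough illustration (not the original training script), the non-default hyperparameters listed above map onto the Sentence Transformers training arguments roughly as follows; the output directory is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Only the non-default values from the list above are set explicitly
args = SentenceTransformerTrainingArguments(
    output_dir="outputs/arabic-nli-matryoshka",  # placeholder
    num_train_epochs=1,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```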
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.2294 | 500 | 10.1279 | - | - | - | - | - |
| 0.4587 | 1000 | 8.0384 | - | - | - | - | - |
| 0.6881 | 1500 | 7.3484 | - | - | - | - | - |
| 0.9174 | 2000 | 4.2216 | - | - | - | - | - |
| 1.0 | 2180 | - | 0.8499 | 0.8579 | 0.8609 | 0.8429 | 0.8616 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139},
}
```
| [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES"
] | Non_BioNLP |
croissantllm/base_90k | croissantllm | text2text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705 | 1,706 | 6 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (90k steps)
This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 90k training steps (1.42T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks covering various orthogonal aspects of model performance in the French language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models and strong translation models. We evaluate our model through the FMTI framework and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model; that is, it has not been finetuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_90k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.
He is heading to the market. -> Il va au marché.
We are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| [
"TRANSLATION"
] | [
"CRAFT"
] | Non_BioNLP |
RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2408.06142",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,724 | 1,724 | 885 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama3-Med42-8B - GGUF
- Model creator: https://huggingface.co/m42-health/
- Original model: https://huggingface.co/m42-health/Llama3-Med42-8B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama3-Med42-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q2_K.gguf) | Q2_K | 2.96GB |
| [Llama3-Med42-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Llama3-Med42-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Llama3-Med42-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Llama3-Med42-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Llama3-Med42-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K.gguf) | Q3_K | 3.74GB |
| [Llama3-Med42-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Llama3-Med42-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Llama3-Med42-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Llama3-Med42-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Llama3-Med42-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Llama3-Med42-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Llama3-Med42-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_K.gguf) | Q4_K | 4.58GB |
| [Llama3-Med42-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Llama3-Med42-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Llama3-Med42-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Llama3-Med42-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Llama3-Med42-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_K.gguf) | Q5_K | 5.34GB |
| [Llama3-Med42-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Llama3-Med42-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Llama3-Med42-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q6_K.gguf) | Q6_K | 6.14GB |
| [Llama3-Med42-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/m42-health_-_Llama3-Med42-8B-gguf/blob/main/Llama3-Med42-8B.Q8_0.gguf) | Q8_0 | 7.95GB |
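As a rough, hedged illustration (not part of the original card), a medium quant such as `Q4_K_M` could be loaded with `llama-cpp-python` along these lines; the local file path, context size, and sampling settings are assumptions:

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file has already been downloaded locally; adjust the path as needed
llm = Llama(model_path="Llama3-Med42-8B.Q4_K_M.gguf", n_ctx=8192)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, respectful and honest medical assistant."},
        {"role": "user", "content": "What are the symptoms of diabetes?"},
    ],
    max_tokens=256,
    temperature=0.4,
)
print(response["choices"][0]["message"]["content"])
```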
Original model description:
---
language:
- en
license: llama3
tags:
- m42
- health
- healthcare
- clinical-llm
pipeline_tag: text-generation
inference: false
license_name: llama3
---
# **Med42-v2 - A Suite of Clinically-aligned Large Language Models**
Med42-v2 is a suite of open-access clinical large language models (LLMs) instruction- and preference-tuned by M42 to expand access to medical knowledge. Built off LLaMA-3 and comprising either 8 or 70 billion parameters, these generative AI systems provide high-quality answers to medical questions.
## Key performance metrics:
- Med42-v2-70B outperforms GPT-4.0 in most of the MCQA tasks.
- Med42-v2-70B achieves a MedQA zero-shot performance of 79.10, surpassing the prior state-of-the-art among all openly available medical LLMs.
- Med42-v2-70B sits at the top of the Clinical Elo Rating Leaderboard.
|Models|Elo Score|
|:---:|:---:|
|**Med42-v2-70B**| 1764 |
|Llama3-70B-Instruct| 1643 |
|GPT4-o| 1426 |
|Llama3-8B-Instruct| 1352 |
|Mixtral-8x7b-Instruct| 970 |
|**Med42-v2-8B**| 924 |
|OpenBioLLM-70B| 657 |
|JSL-MedLlama-3-8B-v2.0| 447 |
## Limitations & Safe Use
- The Med42-v2 suite of models is not yet ready for real clinical use. Extensive human evaluation is ongoing, as it is required to ensure safety.
- Potential for generating incorrect or harmful information.
- Risk of perpetuating biases in training data.
Use this suite of models responsibly! Do not rely on them for medical usage without rigorous safety testing.
## Model Details
*Disclaimer: This large language model is not yet ready for clinical use without further testing and validation. It should not be relied upon for making medical decisions or providing patient care.*
Starting from the Llama3 models, the Med42-v2 suite was instruction-tuned on a dataset of ~1B tokens compiled from different open-access, high-quality sources, including medical flashcards, exam questions, and open-domain dialogues.
**Model Developers:** M42 Health AI Team
**Finetuned from model:** Llama3 - 8B & 70B Instruct
**Context length:** 8k tokens
**Input:** Text only data
**Output:** Model generates text only
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance the model's performance.
**License:** Llama 3 Community License Agreement
**Research Paper:** [Med42-v2: A Suite of Clinical LLMs](https://huggingface.co/papers/2408.06142)
## Intended Use
The Med42-v2 suite of models is being made available for further testing and assessment as AI assistants to enhance clinical decision-making and access to LLMs for healthcare use. Potential use cases include:
- Medical question answering
- Patient record summarization
- Aiding medical diagnosis
- General health Q&A
**Run the model**
You can use the 🤗 Transformers library `text-generation` pipeline to do inference.
```python
import transformers
import torch
model_name_or_path = "m42-health/Llama3-Med42-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_name_or_path,
torch_dtype=torch.bfloat16,
device_map="auto",
)
messages = [
{
"role": "system",
"content": (
"You are a helpful, respectful and honest medical assistant. You are a second version of Med42 developed by the AI team at M42, UAE. "
"Always answer as helpfully as possible, while being safe. "
"Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. "
"Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. "
"If you don’t know the answer to a question, please don’t share false information."
),
},
{"role": "user", "content": "What are the symptoms of diabetes?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=False
)
stop_tokens = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = pipeline(
prompt,
max_new_tokens=512,
eos_token_id=stop_tokens,
do_sample=True,
temperature=0.4,
top_k=150,
top_p=0.75,
)
print(outputs[0]["generated_text"][len(prompt) :])
```
## Hardware and Software
The training was conducted on the NVIDIA DGX cluster with H100 GPUs, utilizing PyTorch's Fully Sharded Data Parallel (FSDP) framework.
## Evaluation Results
### Open-ended question generation
To ensure a robust evaluation of our model's output quality, we employ the LLM-as-a-Judge approach using Prometheus-8x7b-v2.0. Our assessment uses 4,000 carefully curated publicly accessible healthcare-related questions, generating responses from various models. We then use Prometheus to conduct pairwise comparisons of the answers. Drawing inspiration from the LMSYS Chatbot-Arena methodology, we present the results as Elo ratings for each model.
To maintain fairness and eliminate potential bias from prompt engineering, we used the same simple system prompt for every model throughout the evaluation process.
Below is the scoring rubric we used to prompt Prometheus to select the best answer:
```
### Score Rubric:
Which response is of higher overall quality in a medical context? Consider:
* Relevance: Does it directly address the question?
* Completeness: Does it cover all important aspects, details and subpoints?
* Safety: Does it avoid unsafe practices and address potential risks?
* Ethics: Does it maintain confidentiality and avoid biases?
* Clarity: Is it professional, clear and easy to understand?
```
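The pairwise judge verdicts are then converted into per-model Elo ratings. As a minimal sketch (not the original evaluation code; the K-factor, initial ratings, and match list are illustrative assumptions), the standard Elo update looks like this:

```python
# Minimal sketch of a standard Elo update over pairwise judge verdicts
def elo_update(rating_a, rating_b, score_a, k=32):
    """score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b

# Illustrative matches: each tuple is (winner, loser) according to the judge
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for winner, loser in [("model_a", "model_b"), ("model_a", "model_b"), ("model_b", "model_a")]:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], 1.0)
print(ratings)
```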
#### Elo Ratings
|Models|Elo Score|
|:---:|:---:|
|**Med42-v2-70B**| 1764 |
|Llama3-70B-Instruct| 1643 |
|GPT4-o| 1426 |
|Llama3-8B-Instruct| 1352 |
|Mixtral-8x7b-Instruct| 970 |
|**Med42-v2-8B**| 924 |
|OpenBioLLM-70B| 657 |
|JSL-MedLlama-3-8B-v2.0| 447 |
#### Win-rate

### MCQA Evaluation
Med42-v2 improves performance on every clinical benchmark compared to our previous version, including MedQA, MedMCQA, USMLE, MMLU clinical topics, and the MMLU-Pro clinical subset. For all evaluations reported so far, we use [EleutherAI's evaluation harness library](https://github.com/EleutherAI/lm-evaluation-harness) and report zero-shot accuracies (unless otherwise stated). We integrated chat templates into the harness and computed the likelihood of the full answer instead of only the tokens "a.", "b.", "c.", or "d.".
|Model|MMLU Pro|MMLU|MedMCQA|MedQA|USMLE|
|---:|:---:|:---:|:---:|:---:|:---:|
|**Med42v2-70B**|64.36|87.12|73.20|79.10|83.80|
|**Med42v2-8B**|54.30|75.76|61.34|62.84|67.04|
|OpenBioLLM-70B|64.24|90.40|73.18|76.90|79.01|
|GPT-4.0<sup>†</sup>|-|87.00|69.50|78.90|84.05|
|MedGemini*|-|-|-|84.00|-|
|Med-PaLM-2 (5-shot)*|-|87.77|71.30|79.70|-|
|Med42|-|76.72|60.90|61.50|71.85|
|ClinicalCamel-70B|-|69.75|47.00|53.40|54.30|
|GPT-3.5<sup>†</sup>|-|66.63|50.10|50.80|53.00|
|Llama3-8B-Instruct|48.24|72.89|59.65|61.64|60.38|
|Llama3-70B-Instruct|64.24|85.99|72.03|78.88|83.57|
**For MedGemini, results are reported for MedQA without self-training and without search. We note that 0-shot performance is not reported for Med-PaLM 2. Further details can be found at [https://github.com/m42health/med42](https://github.com/m42health/med42)*.
<sup>†</sup> *Results as reported in the paper [Capabilities of GPT-4 on Medical Challenge Problems](https://www.microsoft.com/en-us/research/uploads/prod/2023/03/GPT-4_medical_benchmarks.pdf)*.
## Accessing Med42 and Reporting Issues
Please report any software "bug" or other problems through one of the following means:
- Reporting issues with the model: [https://github.com/m42health/med42](https://github.com/m42health/med42)
- Reporting risky content generated by the model, bugs and/or any security concerns: [https://forms.office.com/r/fPY4Ksecgf](https://forms.office.com/r/fPY4Ksecgf)
- M42’s privacy policy available at [https://m42.ae/privacy-policy/](https://m42.ae/privacy-policy/)
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Med42: <[email protected]>
## Acknowledgements
We thank the Torch FSDP team for their robust distributed training framework, the EleutherAI harness team for their valuable evaluation tools, and the Hugging Face Alignment team for their contributions to responsible AI development.
## Citation
```
@misc{med42v2,
Author = {Cl{\'e}ment Christophe and Praveen K Kanithi and Tathagata Raha and Shadab Khan and Marco AF Pimentel},
Title = {Med42-v2: A Suite of Clinical LLMs},
Year = {2024},
Eprint = {arXiv:2408.06142},
url={https://arxiv.org/abs/2408.06142},
}
```
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"MEDQA"
] | BioNLP |
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-484582 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-484582",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716 | 1,716 | 7 | 0 | ---
datasets:
- fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-484582
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
custom
## How to Use
This model produces sentence embeddings that can be plugged into your NLP pipeline for tasks such as semantic similarity, retrieval, and embedding-based text classification. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the fine-tuned embedding model.
model = SentenceTransformer(
    'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-484582',
    trust_remote_code=True
)

# Encode two texts and compare them with cosine similarity.
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
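Because the fine-tuning target (SciFact) is a retrieval dataset, the embeddings are typically used for semantic search. Below is a brief sketch using sentence-transformers' built-in `semantic_search` utility; the corpus and query strings are placeholders:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer(
    'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-484582',
    trust_remote_code=True
)

# Placeholder corpus and query; in practice these would be scientific abstracts and claims.
corpus = [
    "Vitamin D supplementation reduces fracture risk in older adults.",
    "mRNA vaccines elicit strong neutralizing antibody responses.",
]
query = "Does vitamin D prevent fractures?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Rank corpus passages by cosine similarity to the query.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])
```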
| [
"TEXT_CLASSIFICATION"
] | [
"SCIFACT"
] | Non_BioNLP |
RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF | RcINS | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,740 | 1,740 | 23 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
# RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -c 2048
```
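Once the server is running, you can send it requests over HTTP. The snippet below is a minimal sketch against the server's `/completion` endpoint, assuming the default port of 8080 (adjust the prompt, port, and sampling parameters as needed); newer builds also expose OpenAI-compatible routes under `/v1/`.
```bash
# Query the running llama-server (listens on port 8080 by default)
curl --request POST --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```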
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -c 2048
```
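Since the upstream gte-Qwen2-7B-instruct is primarily an embedding model, you may want embedding vectors rather than text completions. The sketch below assumes a llama.cpp build whose server supports the embeddings endpoint; the exact flag name (`--embedding` vs. `--embeddings`) and endpoint path vary between llama.cpp versions, so check `llama-server --help` for your build.
```bash
# Start the server with embeddings enabled (flag name may differ by version)
./llama-server --hf-repo RcINS/gte-Qwen2-7B-instruct-Q6_K-GGUF --hf-file gte-qwen2-7b-instruct-q6_k.gguf -c 2048 --embedding

# Request an embedding for a single piece of text; recent builds also expose
# an OpenAI-compatible /v1/embeddings route
curl --request POST --url http://localhost:8080/embedding \
  --header "Content-Type: application/json" \
  --data '{"content": "what is the capital of China?"}'
```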
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
jncraton/gte-tiny-ct2-int8 | jncraton | sentence-similarity | [
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,697 | 1,697 | 5 | 0 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: gte_tiny
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.76119402985076
- type: ap
value: 34.63659287952359
- type: f1
value: 65.88939512571113
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 86.61324999999998
- type: ap
value: 81.7476302802319
- type: f1
value: 86.5863470912001
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 42.61000000000001
- type: f1
value: 42.2217180000715
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.377999999999997
- type: map_at_10
value: 44.565
- type: map_at_100
value: 45.48
- type: map_at_1000
value: 45.487
- type: map_at_3
value: 39.841
- type: map_at_5
value: 42.284
- type: mrr_at_1
value: 29.445
- type: mrr_at_10
value: 44.956
- type: mrr_at_100
value: 45.877
- type: mrr_at_1000
value: 45.884
- type: mrr_at_3
value: 40.209
- type: mrr_at_5
value: 42.719
- type: ndcg_at_1
value: 28.377999999999997
- type: ndcg_at_10
value: 53.638
- type: ndcg_at_100
value: 57.354000000000006
- type: ndcg_at_1000
value: 57.513000000000005
- type: ndcg_at_3
value: 43.701
- type: ndcg_at_5
value: 48.114000000000004
- type: precision_at_1
value: 28.377999999999997
- type: precision_at_10
value: 8.272
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.303
- type: precision_at_5
value: 13.129
- type: recall_at_1
value: 28.377999999999997
- type: recall_at_10
value: 82.717
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 54.908
- type: recall_at_5
value: 65.647
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.637318326729876
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.01134479855804
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.82917555338909
- type: mrr
value: 74.7888361254012
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.1657730995964
- type: cos_sim_spearman
value: 86.62787748941281
- type: euclidean_pearson
value: 85.48127914481798
- type: euclidean_spearman
value: 86.48148861167424
- type: manhattan_pearson
value: 85.07496934780823
- type: manhattan_spearman
value: 86.39473964708843
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.73051948051948
- type: f1
value: 81.66368364988331
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.18623707448217
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.12697757150375
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.160000000000004
- type: map_at_10
value: 40.474
- type: map_at_100
value: 41.905
- type: map_at_1000
value: 42.041000000000004
- type: map_at_3
value: 37.147000000000006
- type: map_at_5
value: 38.873999999999995
- type: mrr_at_1
value: 36.91
- type: mrr_at_10
value: 46.495999999999995
- type: mrr_at_100
value: 47.288000000000004
- type: mrr_at_1000
value: 47.339999999999996
- type: mrr_at_3
value: 43.777
- type: mrr_at_5
value: 45.257999999999996
- type: ndcg_at_1
value: 36.91
- type: ndcg_at_10
value: 46.722
- type: ndcg_at_100
value: 51.969
- type: ndcg_at_1000
value: 54.232
- type: ndcg_at_3
value: 41.783
- type: ndcg_at_5
value: 43.797000000000004
- type: precision_at_1
value: 36.91
- type: precision_at_10
value: 9.013
- type: precision_at_100
value: 1.455
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 20.124
- type: precision_at_5
value: 14.363000000000001
- type: recall_at_1
value: 29.160000000000004
- type: recall_at_10
value: 58.521
- type: recall_at_100
value: 80.323
- type: recall_at_1000
value: 95.13000000000001
- type: recall_at_3
value: 44.205
- type: recall_at_5
value: 49.97
- type: map_at_1
value: 27.750000000000004
- type: map_at_10
value: 36.39
- type: map_at_100
value: 37.5
- type: map_at_1000
value: 37.625
- type: map_at_3
value: 33.853
- type: map_at_5
value: 35.397
- type: mrr_at_1
value: 34.14
- type: mrr_at_10
value: 41.841
- type: mrr_at_100
value: 42.469
- type: mrr_at_1000
value: 42.521
- type: mrr_at_3
value: 39.724
- type: mrr_at_5
value: 40.955999999999996
- type: ndcg_at_1
value: 34.14
- type: ndcg_at_10
value: 41.409
- type: ndcg_at_100
value: 45.668
- type: ndcg_at_1000
value: 47.916
- type: ndcg_at_3
value: 37.836
- type: ndcg_at_5
value: 39.650999999999996
- type: precision_at_1
value: 34.14
- type: precision_at_10
value: 7.739
- type: precision_at_100
value: 1.2630000000000001
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 18.217
- type: precision_at_5
value: 12.854
- type: recall_at_1
value: 27.750000000000004
- type: recall_at_10
value: 49.882
- type: recall_at_100
value: 68.556
- type: recall_at_1000
value: 83.186
- type: recall_at_3
value: 39.047
- type: recall_at_5
value: 44.458
- type: map_at_1
value: 36.879
- type: map_at_10
value: 48.878
- type: map_at_100
value: 49.918
- type: map_at_1000
value: 49.978
- type: map_at_3
value: 45.867999999999995
- type: map_at_5
value: 47.637
- type: mrr_at_1
value: 42.696
- type: mrr_at_10
value: 52.342
- type: mrr_at_100
value: 53.044000000000004
- type: mrr_at_1000
value: 53.077
- type: mrr_at_3
value: 50.01
- type: mrr_at_5
value: 51.437
- type: ndcg_at_1
value: 42.696
- type: ndcg_at_10
value: 54.469
- type: ndcg_at_100
value: 58.664
- type: ndcg_at_1000
value: 59.951
- type: ndcg_at_3
value: 49.419999999999995
- type: ndcg_at_5
value: 52.007000000000005
- type: precision_at_1
value: 42.696
- type: precision_at_10
value: 8.734
- type: precision_at_100
value: 1.1769999999999998
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 22.027
- type: precision_at_5
value: 15.135000000000002
- type: recall_at_1
value: 36.879
- type: recall_at_10
value: 67.669
- type: recall_at_100
value: 85.822
- type: recall_at_1000
value: 95.092
- type: recall_at_3
value: 54.157999999999994
- type: recall_at_5
value: 60.436
- type: map_at_1
value: 22.942
- type: map_at_10
value: 31.741999999999997
- type: map_at_100
value: 32.721000000000004
- type: map_at_1000
value: 32.809
- type: map_at_3
value: 29.17
- type: map_at_5
value: 30.714000000000002
- type: mrr_at_1
value: 24.746000000000002
- type: mrr_at_10
value: 33.517
- type: mrr_at_100
value: 34.451
- type: mrr_at_1000
value: 34.522000000000006
- type: mrr_at_3
value: 31.148999999999997
- type: mrr_at_5
value: 32.606
- type: ndcg_at_1
value: 24.746000000000002
- type: ndcg_at_10
value: 36.553000000000004
- type: ndcg_at_100
value: 41.53
- type: ndcg_at_1000
value: 43.811
- type: ndcg_at_3
value: 31.674000000000003
- type: ndcg_at_5
value: 34.241
- type: precision_at_1
value: 24.746000000000002
- type: precision_at_10
value: 5.684
- type: precision_at_100
value: 0.859
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.597000000000001
- type: precision_at_5
value: 9.672
- type: recall_at_1
value: 22.942
- type: recall_at_10
value: 49.58
- type: recall_at_100
value: 72.614
- type: recall_at_1000
value: 89.89200000000001
- type: recall_at_3
value: 36.552
- type: recall_at_5
value: 42.702
- type: map_at_1
value: 15.345
- type: map_at_10
value: 22.428
- type: map_at_100
value: 23.756
- type: map_at_1000
value: 23.872
- type: map_at_3
value: 20.212
- type: map_at_5
value: 21.291
- type: mrr_at_1
value: 19.279
- type: mrr_at_10
value: 27.1
- type: mrr_at_100
value: 28.211000000000002
- type: mrr_at_1000
value: 28.279
- type: mrr_at_3
value: 24.813
- type: mrr_at_5
value: 25.889
- type: ndcg_at_1
value: 19.279
- type: ndcg_at_10
value: 27.36
- type: ndcg_at_100
value: 33.499
- type: ndcg_at_1000
value: 36.452
- type: ndcg_at_3
value: 23.233999999999998
- type: ndcg_at_5
value: 24.806
- type: precision_at_1
value: 19.279
- type: precision_at_10
value: 5.149
- type: precision_at_100
value: 0.938
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 11.360000000000001
- type: precision_at_5
value: 8.035
- type: recall_at_1
value: 15.345
- type: recall_at_10
value: 37.974999999999994
- type: recall_at_100
value: 64.472
- type: recall_at_1000
value: 85.97200000000001
- type: recall_at_3
value: 26.203
- type: recall_at_5
value: 30.485
- type: map_at_1
value: 26.362000000000002
- type: map_at_10
value: 36.406
- type: map_at_100
value: 37.726
- type: map_at_1000
value: 37.84
- type: map_at_3
value: 33.425
- type: map_at_5
value: 35.043
- type: mrr_at_1
value: 32.146
- type: mrr_at_10
value: 41.674
- type: mrr_at_100
value: 42.478
- type: mrr_at_1000
value: 42.524
- type: mrr_at_3
value: 38.948
- type: mrr_at_5
value: 40.415
- type: ndcg_at_1
value: 32.146
- type: ndcg_at_10
value: 42.374
- type: ndcg_at_100
value: 47.919
- type: ndcg_at_1000
value: 50.013
- type: ndcg_at_3
value: 37.29
- type: ndcg_at_5
value: 39.531
- type: precision_at_1
value: 32.146
- type: precision_at_10
value: 7.767
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 17.965999999999998
- type: precision_at_5
value: 12.742999999999999
- type: recall_at_1
value: 26.362000000000002
- type: recall_at_10
value: 54.98800000000001
- type: recall_at_100
value: 78.50200000000001
- type: recall_at_1000
value: 92.146
- type: recall_at_3
value: 40.486
- type: recall_at_5
value: 46.236
- type: map_at_1
value: 24.417
- type: map_at_10
value: 33.161
- type: map_at_100
value: 34.357
- type: map_at_1000
value: 34.473
- type: map_at_3
value: 30.245
- type: map_at_5
value: 31.541999999999998
- type: mrr_at_1
value: 29.909000000000002
- type: mrr_at_10
value: 38.211
- type: mrr_at_100
value: 39.056999999999995
- type: mrr_at_1000
value: 39.114
- type: mrr_at_3
value: 35.769
- type: mrr_at_5
value: 36.922
- type: ndcg_at_1
value: 29.909000000000002
- type: ndcg_at_10
value: 38.694
- type: ndcg_at_100
value: 44.057
- type: ndcg_at_1000
value: 46.6
- type: ndcg_at_3
value: 33.822
- type: ndcg_at_5
value: 35.454
- type: precision_at_1
value: 29.909000000000002
- type: precision_at_10
value: 7.180000000000001
- type: precision_at_100
value: 1.153
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 16.134
- type: precision_at_5
value: 11.256
- type: recall_at_1
value: 24.417
- type: recall_at_10
value: 50.260000000000005
- type: recall_at_100
value: 73.55699999999999
- type: recall_at_1000
value: 91.216
- type: recall_at_3
value: 35.971
- type: recall_at_5
value: 40.793
- type: map_at_1
value: 24.266916666666663
- type: map_at_10
value: 32.75025
- type: map_at_100
value: 33.91341666666667
- type: map_at_1000
value: 34.031749999999995
- type: map_at_3
value: 30.166416666666674
- type: map_at_5
value: 31.577000000000005
- type: mrr_at_1
value: 28.828166666666664
- type: mrr_at_10
value: 36.80991666666667
- type: mrr_at_100
value: 37.67075
- type: mrr_at_1000
value: 37.733
- type: mrr_at_3
value: 34.513416666666664
- type: mrr_at_5
value: 35.788
- type: ndcg_at_1
value: 28.828166666666664
- type: ndcg_at_10
value: 37.796
- type: ndcg_at_100
value: 42.94783333333333
- type: ndcg_at_1000
value: 45.38908333333333
- type: ndcg_at_3
value: 33.374750000000006
- type: ndcg_at_5
value: 35.379666666666665
- type: precision_at_1
value: 28.828166666666664
- type: precision_at_10
value: 6.615749999999999
- type: precision_at_100
value: 1.0848333333333333
- type: precision_at_1000
value: 0.1484166666666667
- type: precision_at_3
value: 15.347833333333332
- type: precision_at_5
value: 10.848916666666666
- type: recall_at_1
value: 24.266916666666663
- type: recall_at_10
value: 48.73458333333333
- type: recall_at_100
value: 71.56341666666667
- type: recall_at_1000
value: 88.63091666666668
- type: recall_at_3
value: 36.31208333333333
- type: recall_at_5
value: 41.55633333333333
- type: map_at_1
value: 23.497
- type: map_at_10
value: 30.249
- type: map_at_100
value: 30.947000000000003
- type: map_at_1000
value: 31.049
- type: map_at_3
value: 28.188000000000002
- type: map_at_5
value: 29.332
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.182
- type: mrr_at_100
value: 33.794999999999995
- type: mrr_at_1000
value: 33.873
- type: mrr_at_3
value: 31.263
- type: mrr_at_5
value: 32.428000000000004
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.252
- type: ndcg_at_100
value: 38.083
- type: ndcg_at_1000
value: 40.682
- type: ndcg_at_3
value: 30.464999999999996
- type: ndcg_at_5
value: 32.282
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.2909999999999995
- type: precision_at_100
value: 0.788
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.037
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.497
- type: recall_at_10
value: 43.813
- type: recall_at_100
value: 61.88399999999999
- type: recall_at_1000
value: 80.926
- type: recall_at_3
value: 33.332
- type: recall_at_5
value: 37.862
- type: map_at_1
value: 16.073
- type: map_at_10
value: 22.705000000000002
- type: map_at_100
value: 23.703
- type: map_at_1000
value: 23.833
- type: map_at_3
value: 20.593
- type: map_at_5
value: 21.7
- type: mrr_at_1
value: 19.683
- type: mrr_at_10
value: 26.39
- type: mrr_at_100
value: 27.264
- type: mrr_at_1000
value: 27.349
- type: mrr_at_3
value: 24.409
- type: mrr_at_5
value: 25.474000000000004
- type: ndcg_at_1
value: 19.683
- type: ndcg_at_10
value: 27.014
- type: ndcg_at_100
value: 31.948
- type: ndcg_at_1000
value: 35.125
- type: ndcg_at_3
value: 23.225
- type: ndcg_at_5
value: 24.866
- type: precision_at_1
value: 19.683
- type: precision_at_10
value: 4.948
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.943
- type: precision_at_5
value: 7.86
- type: recall_at_1
value: 16.073
- type: recall_at_10
value: 36.283
- type: recall_at_100
value: 58.745999999999995
- type: recall_at_1000
value: 81.711
- type: recall_at_3
value: 25.637
- type: recall_at_5
value: 29.919
- type: map_at_1
value: 25.776
- type: map_at_10
value: 33.317
- type: map_at_100
value: 34.437
- type: map_at_1000
value: 34.54
- type: map_at_3
value: 30.706
- type: map_at_5
value: 32.202999999999996
- type: mrr_at_1
value: 30.224
- type: mrr_at_10
value: 37.34
- type: mrr_at_100
value: 38.268
- type: mrr_at_1000
value: 38.335
- type: mrr_at_3
value: 35.075
- type: mrr_at_5
value: 36.348
- type: ndcg_at_1
value: 30.224
- type: ndcg_at_10
value: 38.083
- type: ndcg_at_100
value: 43.413000000000004
- type: ndcg_at_1000
value: 45.856
- type: ndcg_at_3
value: 33.437
- type: ndcg_at_5
value: 35.661
- type: precision_at_1
value: 30.224
- type: precision_at_10
value: 6.1850000000000005
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 14.646
- type: precision_at_5
value: 10.428999999999998
- type: recall_at_1
value: 25.776
- type: recall_at_10
value: 48.787000000000006
- type: recall_at_100
value: 72.04899999999999
- type: recall_at_1000
value: 89.339
- type: recall_at_3
value: 36.192
- type: recall_at_5
value: 41.665
- type: map_at_1
value: 23.156
- type: map_at_10
value: 30.886000000000003
- type: map_at_100
value: 32.551
- type: map_at_1000
value: 32.769
- type: map_at_3
value: 28.584
- type: map_at_5
value: 29.959999999999997
- type: mrr_at_1
value: 28.260999999999996
- type: mrr_at_10
value: 35.555
- type: mrr_at_100
value: 36.687
- type: mrr_at_1000
value: 36.742999999999995
- type: mrr_at_3
value: 33.531
- type: mrr_at_5
value: 34.717
- type: ndcg_at_1
value: 28.260999999999996
- type: ndcg_at_10
value: 36.036
- type: ndcg_at_100
value: 42.675000000000004
- type: ndcg_at_1000
value: 45.303
- type: ndcg_at_3
value: 32.449
- type: ndcg_at_5
value: 34.293
- type: precision_at_1
value: 28.260999999999996
- type: precision_at_10
value: 6.837999999999999
- type: precision_at_100
value: 1.4569999999999999
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 15.217
- type: precision_at_5
value: 11.028
- type: recall_at_1
value: 23.156
- type: recall_at_10
value: 45.251999999999995
- type: recall_at_100
value: 75.339
- type: recall_at_1000
value: 91.56
- type: recall_at_3
value: 34.701
- type: recall_at_5
value: 39.922999999999995
- type: map_at_1
value: 19.846
- type: map_at_10
value: 26.367
- type: map_at_100
value: 27.439999999999998
- type: map_at_1000
value: 27.552
- type: map_at_3
value: 24.006
- type: map_at_5
value: 25.230999999999998
- type: mrr_at_1
value: 21.257
- type: mrr_at_10
value: 28.071
- type: mrr_at_100
value: 29.037000000000003
- type: mrr_at_1000
value: 29.119
- type: mrr_at_3
value: 25.692999999999998
- type: mrr_at_5
value: 27.006000000000004
- type: ndcg_at_1
value: 21.257
- type: ndcg_at_10
value: 30.586000000000002
- type: ndcg_at_100
value: 35.949
- type: ndcg_at_1000
value: 38.728
- type: ndcg_at_3
value: 25.862000000000002
- type: ndcg_at_5
value: 27.967
- type: precision_at_1
value: 21.257
- type: precision_at_10
value: 4.861
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 10.906
- type: precision_at_5
value: 7.763000000000001
- type: recall_at_1
value: 19.846
- type: recall_at_10
value: 41.805
- type: recall_at_100
value: 66.89699999999999
- type: recall_at_1000
value: 87.401
- type: recall_at_3
value: 29.261
- type: recall_at_5
value: 34.227000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.333
- type: map_at_10
value: 17.14
- type: map_at_100
value: 18.878
- type: map_at_1000
value: 19.067
- type: map_at_3
value: 14.123
- type: map_at_5
value: 15.699
- type: mrr_at_1
value: 23.192
- type: mrr_at_10
value: 33.553
- type: mrr_at_100
value: 34.553
- type: mrr_at_1000
value: 34.603
- type: mrr_at_3
value: 29.848000000000003
- type: mrr_at_5
value: 32.18
- type: ndcg_at_1
value: 23.192
- type: ndcg_at_10
value: 24.707
- type: ndcg_at_100
value: 31.701
- type: ndcg_at_1000
value: 35.260999999999996
- type: ndcg_at_3
value: 19.492
- type: ndcg_at_5
value: 21.543
- type: precision_at_1
value: 23.192
- type: precision_at_10
value: 7.824000000000001
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 14.180000000000001
- type: precision_at_5
value: 11.530999999999999
- type: recall_at_1
value: 10.333
- type: recall_at_10
value: 30.142999999999997
- type: recall_at_100
value: 54.298
- type: recall_at_1000
value: 74.337
- type: recall_at_3
value: 17.602999999999998
- type: recall_at_5
value: 22.938
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.03
- type: map_at_10
value: 17.345
- type: map_at_100
value: 23.462
- type: map_at_1000
value: 24.77
- type: map_at_3
value: 12.714
- type: map_at_5
value: 14.722
- type: mrr_at_1
value: 61.0
- type: mrr_at_10
value: 69.245
- type: mrr_at_100
value: 69.715
- type: mrr_at_1000
value: 69.719
- type: mrr_at_3
value: 67.583
- type: mrr_at_5
value: 68.521
- type: ndcg_at_1
value: 47.625
- type: ndcg_at_10
value: 35.973
- type: ndcg_at_100
value: 39.875
- type: ndcg_at_1000
value: 46.922000000000004
- type: ndcg_at_3
value: 40.574
- type: ndcg_at_5
value: 38.18
- type: precision_at_1
value: 61.0
- type: precision_at_10
value: 29.049999999999997
- type: precision_at_100
value: 8.828
- type: precision_at_1000
value: 1.8290000000000002
- type: precision_at_3
value: 45.333
- type: precision_at_5
value: 37.9
- type: recall_at_1
value: 8.03
- type: recall_at_10
value: 22.334
- type: recall_at_100
value: 45.919
- type: recall_at_1000
value: 68.822
- type: recall_at_3
value: 14.038999999999998
- type: recall_at_5
value: 17.118
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.714999999999996
- type: f1
value: 39.83929362259356
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.242999999999995
- type: map_at_10
value: 64.087
- type: map_at_100
value: 64.549
- type: map_at_1000
value: 64.567
- type: map_at_3
value: 61.667
- type: map_at_5
value: 63.266
- type: mrr_at_1
value: 56.271
- type: mrr_at_10
value: 68.146
- type: mrr_at_100
value: 68.524
- type: mrr_at_1000
value: 68.53200000000001
- type: mrr_at_3
value: 65.869
- type: mrr_at_5
value: 67.37100000000001
- type: ndcg_at_1
value: 56.271
- type: ndcg_at_10
value: 70.109
- type: ndcg_at_100
value: 72.09
- type: ndcg_at_1000
value: 72.479
- type: ndcg_at_3
value: 65.559
- type: ndcg_at_5
value: 68.242
- type: precision_at_1
value: 56.271
- type: precision_at_10
value: 9.286999999999999
- type: precision_at_100
value: 1.039
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 26.308
- type: precision_at_5
value: 17.291
- type: recall_at_1
value: 52.242999999999995
- type: recall_at_10
value: 84.71
- type: recall_at_100
value: 93.309
- type: recall_at_1000
value: 96.013
- type: recall_at_3
value: 72.554
- type: recall_at_5
value: 79.069
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.346
- type: map_at_10
value: 24.552
- type: map_at_100
value: 26.161
- type: map_at_1000
value: 26.345000000000002
- type: map_at_3
value: 21.208
- type: map_at_5
value: 22.959
- type: mrr_at_1
value: 29.166999999999998
- type: mrr_at_10
value: 38.182
- type: mrr_at_100
value: 39.22
- type: mrr_at_1000
value: 39.263
- type: mrr_at_3
value: 35.983
- type: mrr_at_5
value: 37.14
- type: ndcg_at_1
value: 29.166999999999998
- type: ndcg_at_10
value: 31.421
- type: ndcg_at_100
value: 38.129999999999995
- type: ndcg_at_1000
value: 41.569
- type: ndcg_at_3
value: 28.172000000000004
- type: ndcg_at_5
value: 29.029
- type: precision_at_1
value: 29.166999999999998
- type: precision_at_10
value: 8.997
- type: precision_at_100
value: 1.5709999999999997
- type: precision_at_1000
value: 0.22
- type: precision_at_3
value: 19.187
- type: precision_at_5
value: 13.980999999999998
- type: recall_at_1
value: 14.346
- type: recall_at_10
value: 37.963
- type: recall_at_100
value: 63.43299999999999
- type: recall_at_1000
value: 84.057
- type: recall_at_3
value: 26.119999999999997
- type: recall_at_5
value: 30.988
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.059
- type: map_at_10
value: 46.421
- type: map_at_100
value: 47.323
- type: map_at_1000
value: 47.403
- type: map_at_3
value: 43.553999999999995
- type: map_at_5
value: 45.283
- type: mrr_at_1
value: 66.117
- type: mrr_at_10
value: 73.10900000000001
- type: mrr_at_100
value: 73.444
- type: mrr_at_1000
value: 73.46000000000001
- type: mrr_at_3
value: 71.70400000000001
- type: mrr_at_5
value: 72.58099999999999
- type: ndcg_at_1
value: 66.117
- type: ndcg_at_10
value: 55.696999999999996
- type: ndcg_at_100
value: 59.167
- type: ndcg_at_1000
value: 60.809000000000005
- type: ndcg_at_3
value: 51.243
- type: ndcg_at_5
value: 53.627
- type: precision_at_1
value: 66.117
- type: precision_at_10
value: 11.538
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 31.861
- type: precision_at_5
value: 20.997
- type: recall_at_1
value: 33.059
- type: recall_at_10
value: 57.691
- type: recall_at_100
value: 71.458
- type: recall_at_1000
value: 82.35
- type: recall_at_3
value: 47.792
- type: recall_at_5
value: 52.492000000000004
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 80.544
- type: ap
value: 74.69592367984956
- type: f1
value: 80.51138138449883
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 17.095
- type: map_at_10
value: 28.038999999999998
- type: map_at_100
value: 29.246
- type: map_at_1000
value: 29.311
- type: map_at_3
value: 24.253
- type: map_at_5
value: 26.442
- type: mrr_at_1
value: 17.535999999999998
- type: mrr_at_10
value: 28.53
- type: mrr_at_100
value: 29.697000000000003
- type: mrr_at_1000
value: 29.755
- type: mrr_at_3
value: 24.779999999999998
- type: mrr_at_5
value: 26.942
- type: ndcg_at_1
value: 17.549999999999997
- type: ndcg_at_10
value: 34.514
- type: ndcg_at_100
value: 40.497
- type: ndcg_at_1000
value: 42.17
- type: ndcg_at_3
value: 26.764
- type: ndcg_at_5
value: 30.678
- type: precision_at_1
value: 17.549999999999997
- type: precision_at_10
value: 5.692
- type: precision_at_100
value: 0.8699999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 11.562
- type: precision_at_5
value: 8.917
- type: recall_at_1
value: 17.095
- type: recall_at_10
value: 54.642
- type: recall_at_100
value: 82.652
- type: recall_at_1000
value: 95.555
- type: recall_at_3
value: 33.504
- type: recall_at_5
value: 42.925000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.75558595531236
- type: f1
value: 91.25979279648296
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.90424076607387
- type: f1
value: 52.067408707562244
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.13449899125757
- type: f1
value: 67.62456762910598
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.862138533961
- type: f1
value: 74.66457222091381
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.10761942610792
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.673172170578408
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.058704977250315
- type: mrr
value: 33.24327760839221
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.163
- type: map_at_10
value: 11.652999999999999
- type: map_at_100
value: 14.849
- type: map_at_1000
value: 16.253999999999998
- type: map_at_3
value: 8.616999999999999
- type: map_at_5
value: 10.100000000000001
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 52.25
- type: mrr_at_100
value: 52.761
- type: mrr_at_1000
value: 52.811
- type: mrr_at_3
value: 50.31
- type: mrr_at_5
value: 51.347
- type: ndcg_at_1
value: 42.105
- type: ndcg_at_10
value: 32.044
- type: ndcg_at_100
value: 29.763
- type: ndcg_at_1000
value: 38.585
- type: ndcg_at_3
value: 36.868
- type: ndcg_at_5
value: 35.154999999999994
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.622
- type: precision_at_100
value: 7.7490000000000006
- type: precision_at_1000
value: 2.054
- type: precision_at_3
value: 34.262
- type: precision_at_5
value: 30.154999999999998
- type: recall_at_1
value: 5.163
- type: recall_at_10
value: 15.478
- type: recall_at_100
value: 30.424
- type: recall_at_1000
value: 62.67
- type: recall_at_3
value: 9.615
- type: recall_at_5
value: 12.369
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.618000000000002
- type: map_at_10
value: 35.465
- type: map_at_100
value: 36.712
- type: map_at_1000
value: 36.757
- type: map_at_3
value: 31.189
- type: map_at_5
value: 33.537
- type: mrr_at_1
value: 24.305
- type: mrr_at_10
value: 37.653
- type: mrr_at_100
value: 38.662
- type: mrr_at_1000
value: 38.694
- type: mrr_at_3
value: 33.889
- type: mrr_at_5
value: 35.979
- type: ndcg_at_1
value: 24.305
- type: ndcg_at_10
value: 43.028
- type: ndcg_at_100
value: 48.653999999999996
- type: ndcg_at_1000
value: 49.733
- type: ndcg_at_3
value: 34.768
- type: ndcg_at_5
value: 38.753
- type: precision_at_1
value: 24.305
- type: precision_at_10
value: 7.59
- type: precision_at_100
value: 1.076
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 16.271
- type: precision_at_5
value: 12.068
- type: recall_at_1
value: 21.618000000000002
- type: recall_at_10
value: 63.977
- type: recall_at_100
value: 89.03999999999999
- type: recall_at_1000
value: 97.10600000000001
- type: recall_at_3
value: 42.422
- type: recall_at_5
value: 51.629000000000005
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.405
- type: map_at_10
value: 83.05
- type: map_at_100
value: 83.684
- type: map_at_1000
value: 83.70400000000001
- type: map_at_3
value: 80.08800000000001
- type: map_at_5
value: 81.937
- type: mrr_at_1
value: 79.85
- type: mrr_at_10
value: 86.369
- type: mrr_at_100
value: 86.48599999999999
- type: mrr_at_1000
value: 86.48700000000001
- type: mrr_at_3
value: 85.315
- type: mrr_at_5
value: 86.044
- type: ndcg_at_1
value: 79.86999999999999
- type: ndcg_at_10
value: 87.04499999999999
- type: ndcg_at_100
value: 88.373
- type: ndcg_at_1000
value: 88.531
- type: ndcg_at_3
value: 84.04
- type: ndcg_at_5
value: 85.684
- type: precision_at_1
value: 79.86999999999999
- type: precision_at_10
value: 13.183
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.67
- type: precision_at_5
value: 24.12
- type: recall_at_1
value: 69.405
- type: recall_at_10
value: 94.634
- type: recall_at_100
value: 99.214
- type: recall_at_1000
value: 99.958
- type: recall_at_3
value: 85.992
- type: recall_at_5
value: 90.656
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.191676323145465
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 56.4874020363744
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.228
- type: map_at_10
value: 11.245
- type: map_at_100
value: 13.353000000000002
- type: map_at_1000
value: 13.665
- type: map_at_3
value: 7.779999999999999
- type: map_at_5
value: 9.405
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.657999999999998
- type: mrr_at_100
value: 32.769999999999996
- type: mrr_at_1000
value: 32.833
- type: mrr_at_3
value: 28.333000000000002
- type: mrr_at_5
value: 30.043
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 19.073
- type: ndcg_at_100
value: 27.055
- type: ndcg_at_1000
value: 32.641
- type: ndcg_at_3
value: 17.483999999999998
- type: ndcg_at_5
value: 15.42
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 10.17
- type: precision_at_100
value: 2.162
- type: precision_at_1000
value: 0.35100000000000003
- type: precision_at_3
value: 16.467000000000002
- type: precision_at_5
value: 13.68
- type: recall_at_1
value: 4.228
- type: recall_at_10
value: 20.573
- type: recall_at_100
value: 43.887
- type: recall_at_1000
value: 71.22
- type: recall_at_3
value: 10.023
- type: recall_at_5
value: 13.873
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.77965135067481
- type: cos_sim_spearman
value: 75.85121335808076
- type: euclidean_pearson
value: 80.09115175262697
- type: euclidean_spearman
value: 75.72249155647123
- type: manhattan_pearson
value: 79.89723577351782
- type: manhattan_spearman
value: 75.49855259442387
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 80.46084116030949
- type: cos_sim_spearman
value: 72.57579204392951
- type: euclidean_pearson
value: 76.39020830763684
- type: euclidean_spearman
value: 72.3718627025895
- type: manhattan_pearson
value: 76.6148833027359
- type: manhattan_spearman
value: 72.57570008442319
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.43678068337017
- type: cos_sim_spearman
value: 82.38941154076062
- type: euclidean_pearson
value: 81.59260573633661
- type: euclidean_spearman
value: 82.31144262574114
- type: manhattan_pearson
value: 81.43266909137056
- type: manhattan_spearman
value: 82.14704293004861
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.73713431763163
- type: cos_sim_spearman
value: 77.97860512809388
- type: euclidean_pearson
value: 80.35755041527027
- type: euclidean_spearman
value: 78.021703511412
- type: manhattan_pearson
value: 80.24440317109162
- type: manhattan_spearman
value: 77.93165415697575
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.15111852351204
- type: cos_sim_spearman
value: 86.54032447238258
- type: euclidean_pearson
value: 86.14157021537433
- type: euclidean_spearman
value: 86.67537291929713
- type: manhattan_pearson
value: 86.081041854808
- type: manhattan_spearman
value: 86.61561701560558
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.34532445104026
- type: cos_sim_spearman
value: 83.31325001474116
- type: euclidean_pearson
value: 82.81892375201032
- type: euclidean_spearman
value: 83.4521695148055
- type: manhattan_pearson
value: 82.72503790526163
- type: manhattan_spearman
value: 83.37833652941349
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.25463453839801
- type: cos_sim_spearman
value: 88.27655263515948
- type: euclidean_pearson
value: 88.0248334411439
- type: euclidean_spearman
value: 88.18141448876868
- type: manhattan_pearson
value: 87.8080451127279
- type: manhattan_spearman
value: 88.01028114423058
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.57551045355218
- type: cos_sim_spearman
value: 66.67614095126629
- type: euclidean_pearson
value: 66.0787243112528
- type: euclidean_spearman
value: 66.83660560636939
- type: manhattan_pearson
value: 66.74684019662031
- type: manhattan_spearman
value: 67.11761598074368
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.70881496766829
- type: cos_sim_spearman
value: 84.37803542941634
- type: euclidean_pearson
value: 84.84501245857096
- type: euclidean_spearman
value: 84.47088079741476
- type: manhattan_pearson
value: 84.77244090794765
- type: manhattan_spearman
value: 84.43307343706205
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.53946254759089
- type: mrr
value: 94.68259953554072
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 51.817
- type: map_at_10
value: 62.339999999999996
- type: map_at_100
value: 62.88
- type: map_at_1000
value: 62.909000000000006
- type: map_at_3
value: 59.004
- type: map_at_5
value: 60.906000000000006
- type: mrr_at_1
value: 54.333
- type: mrr_at_10
value: 63.649
- type: mrr_at_100
value: 64.01
- type: mrr_at_1000
value: 64.039
- type: mrr_at_3
value: 61.056
- type: mrr_at_5
value: 62.639
- type: ndcg_at_1
value: 54.333
- type: ndcg_at_10
value: 67.509
- type: ndcg_at_100
value: 69.69999999999999
- type: ndcg_at_1000
value: 70.613
- type: ndcg_at_3
value: 61.729
- type: ndcg_at_5
value: 64.696
- type: precision_at_1
value: 54.333
- type: precision_at_10
value: 9.2
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.0
- type: precision_at_5
value: 16.2
- type: recall_at_1
value: 51.817
- type: recall_at_10
value: 82.056
- type: recall_at_100
value: 91.667
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 66.717
- type: recall_at_5
value: 74.17200000000001
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82475247524752
- type: cos_sim_ap
value: 95.4781199603258
- type: cos_sim_f1
value: 91.16186693147964
- type: cos_sim_precision
value: 90.53254437869822
- type: cos_sim_recall
value: 91.8
- type: dot_accuracy
value: 99.75049504950495
- type: dot_ap
value: 93.05183539809457
- type: dot_f1
value: 87.31117824773412
- type: dot_precision
value: 87.93103448275862
- type: dot_recall
value: 86.7
- type: euclidean_accuracy
value: 99.82475247524752
- type: euclidean_ap
value: 95.38547978154382
- type: euclidean_f1
value: 91.16325511732403
- type: euclidean_precision
value: 91.02691924227318
- type: euclidean_recall
value: 91.3
- type: manhattan_accuracy
value: 99.82574257425742
- type: manhattan_ap
value: 95.47237521890308
- type: manhattan_f1
value: 91.27849355797821
- type: manhattan_precision
value: 90.47151277013754
- type: manhattan_recall
value: 92.10000000000001
- type: max_accuracy
value: 99.82574257425742
- type: max_ap
value: 95.4781199603258
- type: max_f1
value: 91.27849355797821
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.542169376331245
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.74399302634387
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.65076347632749
- type: mrr
value: 50.418099057804945
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.73997756592847
- type: cos_sim_spearman
value: 29.465208011593308
- type: dot_pearson
value: 24.83735342474541
- type: dot_spearman
value: 26.005180528584855
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.208
- type: map_at_10
value: 1.434
- type: map_at_100
value: 7.829
- type: map_at_1000
value: 19.807
- type: map_at_3
value: 0.549
- type: map_at_5
value: 0.8330000000000001
- type: mrr_at_1
value: 78.0
- type: mrr_at_10
value: 85.35199999999999
- type: mrr_at_100
value: 85.673
- type: mrr_at_1000
value: 85.673
- type: mrr_at_3
value: 84.667
- type: mrr_at_5
value: 85.06700000000001
- type: ndcg_at_1
value: 72.0
- type: ndcg_at_10
value: 59.214999999999996
- type: ndcg_at_100
value: 44.681
- type: ndcg_at_1000
value: 43.035000000000004
- type: ndcg_at_3
value: 66.53099999999999
- type: ndcg_at_5
value: 63.23
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 62.4
- type: precision_at_100
value: 45.76
- type: precision_at_1000
value: 19.05
- type: precision_at_3
value: 71.333
- type: precision_at_5
value: 67.2
- type: recall_at_1
value: 0.208
- type: recall_at_10
value: 1.6580000000000001
- type: recall_at_100
value: 11.324
- type: recall_at_1000
value: 41.537
- type: recall_at_3
value: 0.579
- type: recall_at_5
value: 0.8959999999999999
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.442
- type: map_at_10
value: 8.863
- type: map_at_100
value: 14.606
- type: map_at_1000
value: 16.258
- type: map_at_3
value: 4.396
- type: map_at_5
value: 6.199000000000001
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 43.492
- type: mrr_at_100
value: 44.557
- type: mrr_at_1000
value: 44.557
- type: mrr_at_3
value: 40.816
- type: mrr_at_5
value: 42.143
- type: ndcg_at_1
value: 25.509999999999998
- type: ndcg_at_10
value: 22.076
- type: ndcg_at_100
value: 34.098
- type: ndcg_at_1000
value: 46.265
- type: ndcg_at_3
value: 24.19
- type: ndcg_at_5
value: 23.474
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.286
- type: precision_at_1000
value: 1.5310000000000001
- type: precision_at_3
value: 25.85
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.442
- type: recall_at_10
value: 15.012
- type: recall_at_100
value: 45.865
- type: recall_at_1000
value: 82.958
- type: recall_at_3
value: 5.731
- type: recall_at_5
value: 9.301
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.974
- type: ap
value: 14.534996211286682
- type: f1
value: 54.785946183399005
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.56819468024901
- type: f1
value: 58.92391487111204
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 43.273202335218194
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.37742146986946
- type: cos_sim_ap
value: 68.1684129575579
- type: cos_sim_f1
value: 64.93475108748189
- type: cos_sim_precision
value: 59.89745876058849
- type: cos_sim_recall
value: 70.89709762532982
- type: dot_accuracy
value: 80.49710913750968
- type: dot_ap
value: 54.699790073944186
- type: dot_f1
value: 54.45130013221684
- type: dot_precision
value: 46.74612183125236
- type: dot_recall
value: 65.19788918205805
- type: euclidean_accuracy
value: 84.5085533766466
- type: euclidean_ap
value: 68.38835695236224
- type: euclidean_f1
value: 65.3391121002694
- type: euclidean_precision
value: 58.75289656625237
- type: euclidean_recall
value: 73.58839050131925
- type: manhattan_accuracy
value: 84.40126363473803
- type: manhattan_ap
value: 68.09539181555348
- type: manhattan_f1
value: 64.99028182701653
- type: manhattan_precision
value: 60.22062134173795
- type: manhattan_recall
value: 70.58047493403694
- type: max_accuracy
value: 84.5085533766466
- type: max_ap
value: 68.38835695236224
- type: max_f1
value: 65.3391121002694
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34167733923235
- type: cos_sim_ap
value: 84.84136381147736
- type: cos_sim_f1
value: 77.01434980904001
- type: cos_sim_precision
value: 74.27937915742794
- type: cos_sim_recall
value: 79.95842315983985
- type: dot_accuracy
value: 85.06422944075756
- type: dot_ap
value: 76.49446747522325
- type: dot_f1
value: 71.11606520830432
- type: dot_precision
value: 64.93638676844785
- type: dot_recall
value: 78.59562673236834
- type: euclidean_accuracy
value: 88.45810532852097
- type: euclidean_ap
value: 84.91526721863501
- type: euclidean_f1
value: 77.04399001750662
- type: euclidean_precision
value: 74.62298867162133
- type: euclidean_recall
value: 79.62734832152756
- type: manhattan_accuracy
value: 88.46004579500912
- type: manhattan_ap
value: 84.81590026238194
- type: manhattan_f1
value: 76.97804626491822
- type: manhattan_precision
value: 73.79237288135593
- type: manhattan_recall
value: 80.45118570988605
- type: max_accuracy
value: 88.46004579500912
- type: max_ap
value: 84.91526721863501
- type: max_f1
value: 77.04399001750662
---
# gte-tiny
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
It is distilled from `thenlper/gte-small`, with comparable (slightly lower) performance at around half the size.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
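The embeddings can be compared directly for tasks such as semantic search. The sketch below is not part of the original card: it is a minimal example, assuming the same `{MODEL_NAME}` placeholder as above and an illustrative toy corpus, that ranks a few sentences against a query with cosine similarity using `sentence_transformers.util.semantic_search`.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')

# Hypothetical toy corpus and query for illustration only
corpus = [
    "A man is eating food.",
    "A monkey is playing drums.",
    "Someone is riding a white horse on an enclosed ground.",
]
query = "A person is eating pasta."

# Encode corpus and query into 384-dimensional embeddings
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.4f}\t{corpus[hit['corpus_id']]}")
```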
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling (mean pooling in this case)
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
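To compare the resulting embeddings, cosine similarity over L2-normalized vectors is the usual choice. The snippet below is a small follow-up sketch, not from the original card; it reuses the `sentence_embeddings` tensor computed above and assumes nothing beyond plain PyTorch.
```python
import torch.nn.functional as F

# Normalize so that dot products equal cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)

# Pairwise cosine similarity between the example sentences
similarity = normalized @ normalized.T
print(similarity)
```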
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
chris-code/multilingual-e5-large-Q8_0-GGUF | chris-code | feature-extraction | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,724 | 1,724 | 21 | 0 | ---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 43.486574390835635
- type: f1
value: 73.32700092140148
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.22055674518201
- type: ap
value: 81.55756710830498
- type: f1
value: 69.28271787752661
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.41979010494754
- type: ap
value: 29.34879922376344
- type: f1
value: 67.62475449011278
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.8372591006424
- type: ap
value: 26.557560591210738
- type: f1
value: 64.96619417368707
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.489875
- type: ap
value: 90.98758636917603
- type: f1
value: 93.48554819717332
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.564
- type: f1
value: 46.75122173518047
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.400000000000006
- type: f1
value: 44.17195682400632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.068
- type: f1
value: 42.38155696855596
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.89
- type: f1
value: 40.84407321682663
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.120000000000005
- type: f1
value: 39.522976223819114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.832
- type: f1
value: 38.0392533394713
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.055
- type: map_at_100
value: 46.900999999999996
- type: map_at_1000
value: 46.911
- type: map_at_3
value: 41.548
- type: map_at_5
value: 44.297
- type: mrr_at_1
value: 31.152
- type: mrr_at_10
value: 46.231
- type: mrr_at_100
value: 47.07
- type: mrr_at_1000
value: 47.08
- type: mrr_at_3
value: 41.738
- type: mrr_at_5
value: 44.468999999999994
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 54.379999999999995
- type: ndcg_at_100
value: 58.138
- type: ndcg_at_1000
value: 58.389
- type: ndcg_at_3
value: 45.156
- type: ndcg_at_5
value: 50.123
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.54
- type: precision_at_5
value: 13.542000000000002
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.619
- type: recall_at_5
value: 67.71000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.30960650674069
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.427074197498996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.28270056031872
- type: mrr
value: 74.38332673789738
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.05942144105269
- type: cos_sim_spearman
value: 82.51212105850809
- type: euclidean_pearson
value: 81.95639829909122
- type: euclidean_spearman
value: 82.3717564144213
- type: manhattan_pearson
value: 81.79273425468256
- type: manhattan_spearman
value: 82.20066817871039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.46764091858039
- type: f1
value: 99.37717466945023
- type: precision
value: 99.33194154488518
- type: recall
value: 99.46764091858039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.29407880255337
- type: f1
value: 98.11248073959938
- type: precision
value: 98.02443319392472
- type: recall
value: 98.29407880255337
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.79009352268791
- type: f1
value: 97.5176076665512
- type: precision
value: 97.38136473848286
- type: recall
value: 97.79009352268791
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.20133403545726
- type: precision
value: 99.17500438827453
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.67672206031433
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.34220182511161
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.4987096128766
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.558249999999997
- type: map_at_10
value: 34.44425000000001
- type: map_at_100
value: 35.59833333333333
- type: map_at_1000
value: 35.706916666666665
- type: map_at_3
value: 31.691749999999995
- type: map_at_5
value: 33.252916666666664
- type: mrr_at_1
value: 30.252666666666666
- type: mrr_at_10
value: 38.60675
- type: mrr_at_100
value: 39.42666666666666
- type: mrr_at_1000
value: 39.48408333333334
- type: mrr_at_3
value: 36.17441666666665
- type: mrr_at_5
value: 37.56275
- type: ndcg_at_1
value: 30.252666666666666
- type: ndcg_at_10
value: 39.683
- type: ndcg_at_100
value: 44.68541666666667
- type: ndcg_at_1000
value: 46.94316666666668
- type: ndcg_at_3
value: 34.961749999999995
- type: ndcg_at_5
value: 37.215666666666664
- type: precision_at_1
value: 30.252666666666666
- type: precision_at_10
value: 6.904166666666667
- type: precision_at_100
value: 1.0989999999999995
- type: precision_at_1000
value: 0.14733333333333334
- type: precision_at_3
value: 16.037666666666667
- type: precision_at_5
value: 11.413583333333333
- type: recall_at_1
value: 25.558249999999997
- type: recall_at_10
value: 51.13341666666666
- type: recall_at_100
value: 73.08366666666667
- type: recall_at_1000
value: 88.79483333333334
- type: recall_at_3
value: 37.989083333333326
- type: recall_at_5
value: 43.787833333333325
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.338
- type: map_at_10
value: 18.360000000000003
- type: map_at_100
value: 19.942
- type: map_at_1000
value: 20.134
- type: map_at_3
value: 15.174000000000001
- type: map_at_5
value: 16.830000000000002
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 33.768
- type: mrr_at_100
value: 34.707
- type: mrr_at_1000
value: 34.766000000000005
- type: mrr_at_3
value: 30.977
- type: mrr_at_5
value: 32.528
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 25.733
- type: ndcg_at_100
value: 32.288
- type: ndcg_at_1000
value: 35.992000000000004
- type: ndcg_at_3
value: 20.866
- type: ndcg_at_5
value: 22.612
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.124
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 15.679000000000002
- type: precision_at_5
value: 12.117
- type: recall_at_1
value: 10.338
- type: recall_at_10
value: 31.154
- type: recall_at_100
value: 54.161
- type: recall_at_1000
value: 75.21900000000001
- type: recall_at_3
value: 19.427
- type: recall_at_5
value: 24.214
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.498
- type: map_at_10
value: 19.103
- type: map_at_100
value: 27.375
- type: map_at_1000
value: 28.981
- type: map_at_3
value: 13.764999999999999
- type: map_at_5
value: 15.950000000000001
- type: mrr_at_1
value: 65.5
- type: mrr_at_10
value: 74.53800000000001
- type: mrr_at_100
value: 74.71799999999999
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.792
- type: mrr_at_5
value: 73.554
- type: ndcg_at_1
value: 53.37499999999999
- type: ndcg_at_10
value: 41.286
- type: ndcg_at_100
value: 45.972
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 46.172999999999995
- type: ndcg_at_5
value: 43.033
- type: precision_at_1
value: 65.5
- type: precision_at_10
value: 32.725
- type: precision_at_100
value: 10.683
- type: precision_at_1000
value: 1.978
- type: precision_at_3
value: 50
- type: precision_at_5
value: 41.349999999999994
- type: recall_at_1
value: 8.498
- type: recall_at_10
value: 25.070999999999998
- type: recall_at_100
value: 52.383
- type: recall_at_1000
value: 74.91499999999999
- type: recall_at_3
value: 15.207999999999998
- type: recall_at_5
value: 18.563
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.5
- type: f1
value: 41.93833713984145
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.914
- type: map_at_10
value: 78.10000000000001
- type: map_at_100
value: 78.333
- type: map_at_1000
value: 78.346
- type: map_at_3
value: 76.626
- type: map_at_5
value: 77.627
- type: mrr_at_1
value: 72.74199999999999
- type: mrr_at_10
value: 82.414
- type: mrr_at_100
value: 82.511
- type: mrr_at_1000
value: 82.513
- type: mrr_at_3
value: 81.231
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.74199999999999
- type: ndcg_at_10
value: 82.806
- type: ndcg_at_100
value: 83.677
- type: ndcg_at_1000
value: 83.917
- type: ndcg_at_3
value: 80.305
- type: ndcg_at_5
value: 81.843
- type: precision_at_1
value: 72.74199999999999
- type: precision_at_10
value: 10.24
- type: precision_at_100
value: 1.089
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 31.268
- type: precision_at_5
value: 19.706000000000003
- type: recall_at_1
value: 67.914
- type: recall_at_10
value: 92.889
- type: recall_at_100
value: 96.42699999999999
- type: recall_at_1000
value: 97.92
- type: recall_at_3
value: 86.21
- type: recall_at_5
value: 90.036
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.166
- type: map_at_10
value: 35.57
- type: map_at_100
value: 37.405
- type: map_at_1000
value: 37.564
- type: map_at_3
value: 30.379
- type: map_at_5
value: 33.324
- type: mrr_at_1
value: 43.519000000000005
- type: mrr_at_10
value: 51.556000000000004
- type: mrr_at_100
value: 52.344
- type: mrr_at_1000
value: 52.373999999999995
- type: mrr_at_3
value: 48.868
- type: mrr_at_5
value: 50.319
- type: ndcg_at_1
value: 43.519000000000005
- type: ndcg_at_10
value: 43.803
- type: ndcg_at_100
value: 50.468999999999994
- type: ndcg_at_1000
value: 53.111
- type: ndcg_at_3
value: 38.893
- type: ndcg_at_5
value: 40.653
- type: precision_at_1
value: 43.519000000000005
- type: precision_at_10
value: 12.253
- type: precision_at_100
value: 1.931
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 25.617
- type: precision_at_5
value: 19.383
- type: recall_at_1
value: 22.166
- type: recall_at_10
value: 51.6
- type: recall_at_100
value: 76.574
- type: recall_at_1000
value: 92.192
- type: recall_at_3
value: 34.477999999999994
- type: recall_at_5
value: 41.835
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 62.961999999999996
- type: map_at_100
value: 63.79899999999999
- type: map_at_1000
value: 63.854
- type: map_at_3
value: 59.399
- type: map_at_5
value: 61.669
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 84.321
- type: mrr_at_100
value: 84.49600000000001
- type: mrr_at_1000
value: 84.502
- type: mrr_at_3
value: 83.421
- type: mrr_at_5
value: 83.977
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 71.229
- type: ndcg_at_100
value: 74.10900000000001
- type: ndcg_at_1000
value: 75.169
- type: ndcg_at_3
value: 66.28699999999999
- type: ndcg_at_5
value: 69.084
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 14.993
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 42.737
- type: precision_at_5
value: 27.843
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 74.96300000000001
- type: recall_at_100
value: 86.199
- type: recall_at_1000
value: 93.228
- type: recall_at_3
value: 64.105
- type: recall_at_5
value: 69.608
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.23160000000001
- type: ap
value: 85.5674856808308
- type: f1
value: 90.18033354786317
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.091
- type: map_at_10
value: 36.753
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 37.958999999999996
- type: map_at_3
value: 32.818999999999996
- type: map_at_5
value: 35.171
- type: mrr_at_1
value: 24.742
- type: mrr_at_10
value: 37.285000000000004
- type: mrr_at_100
value: 38.391999999999996
- type: mrr_at_1000
value: 38.431
- type: mrr_at_3
value: 33.440999999999995
- type: mrr_at_5
value: 35.75
- type: ndcg_at_1
value: 24.742
- type: ndcg_at_10
value: 43.698
- type: ndcg_at_100
value: 49.145
- type: ndcg_at_1000
value: 50.23800000000001
- type: ndcg_at_3
value: 35.769
- type: ndcg_at_5
value: 39.961999999999996
- type: precision_at_1
value: 24.742
- type: precision_at_10
value: 6.7989999999999995
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 24.091
- type: recall_at_10
value: 65.068
- type: recall_at_100
value: 89.899
- type: recall_at_1000
value: 98.16
- type: recall_at_3
value: 43.68
- type: recall_at_5
value: 53.754999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.66621067031465
- type: f1
value: 93.49622853272142
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.94702733164272
- type: f1
value: 91.17043441745282
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.20146764509674
- type: f1
value: 91.98359080555608
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.99780770435328
- type: f1
value: 89.19746342724068
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.78486912871998
- type: f1
value: 89.24578823628642
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.74502712477394
- type: f1
value: 89.00297573881542
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.9046967624259
- type: f1
value: 59.36787125785957
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.5280360664976
- type: f1
value: 57.17723440888718
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.44029352901934
- type: f1
value: 54.052855531072964
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 70.5606013153774
- type: f1
value: 52.62215934386531
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.11581211903908
- type: f1
value: 52.341291845645465
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.28933092224233
- type: f1
value: 57.07918745504911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.38063214525892
- type: f1
value: 59.46463723443009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.06926698049766
- type: f1
value: 52.49084283283562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.74983187626093
- type: f1
value: 56.960640620165904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.86550100874243
- type: f1
value: 62.47370548140688
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.971082716879636
- type: f1
value: 61.03812421957381
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98318762609282
- type: f1
value: 51.51207916008392
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.45527908540686
- type: f1
value: 66.16631905400318
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.32750504371216
- type: f1
value: 66.16755288646591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.09213180901143
- type: f1
value: 66.95654394661507
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.75588433086752
- type: f1
value: 71.79973779656923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.49428379287154
- type: f1
value: 68.37494379215734
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.90921318090115
- type: f1
value: 66.79517376481645
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.12104909213181
- type: f1
value: 67.29448842879584
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.34095494283793
- type: f1
value: 67.01134288992947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.61264290517822
- type: f1
value: 64.68730512660757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.79757901815738
- type: f1
value: 65.24938539425598
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.68728984532616
- type: f1
value: 67.0487169762553
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.07464694014795
- type: f1
value: 59.183532276789286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.04707464694015
- type: f1
value: 67.66829629003848
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.42434431741762
- type: f1
value: 59.01617226544757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.53127101546738
- type: f1
value: 68.10033760906255
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.50504371217215
- type: f1
value: 69.74931103158923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.91190316072628
- type: f1
value: 54.05551136648796
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.78211163416275
- type: f1
value: 49.874888544058535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.017484868863484
- type: f1
value: 44.53364263352014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.16207128446537
- type: f1
value: 59.01185692320829
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.42501681237391
- type: f1
value: 67.13169450166086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0780094149294
- type: f1
value: 64.41720167850707
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 62.414138683804424
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.71149966375252
- type: f1
value: 58.594805125087234
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.03900470746471
- type: f1
value: 63.87937257883887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.8776059179556
- type: f1
value: 57.48587618059131
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87895090786819
- type: f1
value: 66.8141299430347
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.45057162071285
- type: f1
value: 67.46444039673516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.546738399462
- type: f1
value: 68.63640876702655
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.72965702757229
- type: f1
value: 68.54119560379115
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.35574983187625
- type: f1
value: 65.88844917691927
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.70477471418964
- type: f1
value: 69.19665697061978
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0880968392737
- type: f1
value: 64.76962317666086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.18493611297916
- type: f1
value: 62.49984559035371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.75857431069265
- type: f1
value: 69.20053687623418
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.500336247478145
- type: f1
value: 55.2972398687929
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.68997982515132
- type: f1
value: 59.36848202755348
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.01950235373235
- type: f1
value: 60.09351954625423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.29186281102892
- type: f1
value: 67.57860496703447
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.77471418964357
- type: f1
value: 61.913983147713836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87222595830532
- type: f1
value: 66.03679033708141
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.04505716207127
- type: f1
value: 61.28569169817908
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.38466711499663
- type: f1
value: 67.20532357036844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.12306657700067
- type: f1
value: 68.91251226588182
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.20040349697378
- type: f1
value: 66.02657347714175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.73907195696032
- type: f1
value: 66.98484521791418
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.58843308675185
- type: f1
value: 58.95591723092005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.22730329522528
- type: f1
value: 66.0894499712115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48285137861465
- type: f1
value: 65.21963176785157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 66.8212192745412
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.09213180901143
- type: f1
value: 56.70735546356339
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.05716207128448
- type: f1
value: 74.8413712365364
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.69737726967047
- type: f1
value: 74.7664341963
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.90383322125084
- type: f1
value: 73.59201554448323
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.51176866173503
- type: f1
value: 77.46104434577758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.31069266980496
- type: f1
value: 74.61048660675635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.95225285810356
- type: f1
value: 72.33160006574627
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.12373907195696
- type: f1
value: 73.20921012557481
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 73.82348774610831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.40215198386012
- type: f1
value: 71.11945183971858
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.12844653665098
- type: f1
value: 71.34450495911766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.52252858103566
- type: f1
value: 73.98878711342999
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.93611297915265
- type: f1
value: 63.723200467653385
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11903160726295
- type: f1
value: 73.82138439467096
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 66.02172193802167
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.32414256893072
- type: f1
value: 74.30943421170574
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.46805648957633
- type: f1
value: 77.62808409298209
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.318762609280434
- type: f1
value: 62.094284066075076
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.34902488231338
- type: f1
value: 57.12893860987984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.88433086751849
- type: f1
value: 48.2272350802058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.4425016812374
- type: f1
value: 64.61463095996173
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.04707464694015
- type: f1
value: 75.05099199098998
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.50437121721586
- type: f1
value: 69.83397721096314
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.94283792871553
- type: f1
value: 68.8704663703913
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.79488903833222
- type: f1
value: 63.615424063345436
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.88231338264963
- type: f1
value: 68.57892302593237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.248150638870214
- type: f1
value: 61.06680605338809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.84196368527236
- type: f1
value: 74.52566464968763
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.8285137861466
- type: f1
value: 74.8853197608802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.13248150638869
- type: f1
value: 74.3982040999179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.49024882313383
- type: f1
value: 73.82153848368573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.72158708809684
- type: f1
value: 71.85049433180541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.137861466039
- type: f1
value: 75.37628348188467
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- type: f1
value: 71.87537624521661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.27572293207801
- type: f1
value: 68.80017302344231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.09952925353059
- type: f1
value: 76.07992707688408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.140551445864155
- type: f1
value: 61.73855010331415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.27774041694687
- type: f1
value: 64.83664868894539
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.69468728984533
- type: f1
value: 64.76239666920868
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.44653665097512
- type: f1
value: 73.14646052013873
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.71351714862139
- type: f1
value: 66.67212180163382
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.9946200403497
- type: f1
value: 73.87348793725525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.15400134498992
- type: f1
value: 67.09433241421094
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 73.59502539433753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.82582380632145
- type: f1
value: 76.89992945316313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.81237390719569
- type: f1
value: 72.36499770986265
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.480506569594695
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.71252128004552
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.421396787056548
- type: mrr
value: 32.48155274872267
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.595
- type: map_at_10
value: 12.642000000000001
- type: map_at_100
value: 15.726
- type: map_at_1000
value: 17.061999999999998
- type: map_at_3
value: 9.125
- type: map_at_5
value: 10.866000000000001
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.227999999999994
- type: mrr_at_100
value: 52.898999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.845
- type: mrr_at_5
value: 51.115
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 30.869999999999997
- type: ndcg_at_1000
value: 39.487
- type: ndcg_at_3
value: 38.903999999999996
- type: ndcg_at_5
value: 37.236999999999995
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.480000000000004
- type: precision_at_100
value: 7.672
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 36.636
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 5.595
- type: recall_at_10
value: 16.466
- type: recall_at_100
value: 31.226
- type: recall_at_1000
value: 62.778999999999996
- type: recall_at_3
value: 9.931
- type: recall_at_5
value: 12.884
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.414
- type: map_at_10
value: 56.754000000000005
- type: map_at_100
value: 57.457
- type: map_at_1000
value: 57.477999999999994
- type: map_at_3
value: 52.873999999999995
- type: map_at_5
value: 55.175
- type: mrr_at_1
value: 45.278
- type: mrr_at_10
value: 59.192
- type: mrr_at_100
value: 59.650000000000006
- type: mrr_at_1000
value: 59.665
- type: mrr_at_3
value: 56.141
- type: mrr_at_5
value: 57.998000000000005
- type: ndcg_at_1
value: 45.278
- type: ndcg_at_10
value: 64.056
- type: ndcg_at_100
value: 66.89
- type: ndcg_at_1000
value: 67.364
- type: ndcg_at_3
value: 56.97
- type: ndcg_at_5
value: 60.719
- type: precision_at_1
value: 45.278
- type: precision_at_10
value: 9.994
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.512
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 40.414
- type: recall_at_10
value: 83.596
- type: recall_at_100
value: 95.72
- type: recall_at_1000
value: 99.24
- type: recall_at_3
value: 65.472
- type: recall_at_5
value: 74.039
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.352
- type: map_at_10
value: 84.369
- type: map_at_100
value: 85.02499999999999
- type: map_at_1000
value: 85.04
- type: map_at_3
value: 81.42399999999999
- type: map_at_5
value: 83.279
- type: mrr_at_1
value: 81.05
- type: mrr_at_10
value: 87.401
- type: mrr_at_100
value: 87.504
- type: mrr_at_1000
value: 87.505
- type: mrr_at_3
value: 86.443
- type: mrr_at_5
value: 87.10799999999999
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.411
- type: ndcg_at_1000
value: 89.507
- type: ndcg_at_3
value: 85.28099999999999
- type: ndcg_at_5
value: 86.888
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.406
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 70.352
- type: recall_at_10
value: 95.358
- type: recall_at_100
value: 99.541
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.111
- type: recall_at_5
value: 91.643
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.54068723291946
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.216287629895994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.071
- type: map_at_100
value: 11.892
- type: map_at_1000
value: 12.196
- type: map_at_3
value: 7.234
- type: map_at_5
value: 8.613999999999999
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 30.516
- type: mrr_at_100
value: 31.656000000000002
- type: mrr_at_1000
value: 31.723000000000003
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.270000000000003
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 17.474
- type: ndcg_at_100
value: 25.020999999999997
- type: ndcg_at_1000
value: 30.728
- type: ndcg_at_3
value: 16.588
- type: ndcg_at_5
value: 14.498
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 2.011
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 12.839999999999998
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.497
- type: recall_at_100
value: 40.8
- type: recall_at_1000
value: 68.812
- type: recall_at_3
value: 9.508
- type: recall_at_5
value: 12.983
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.967008785134
- type: cos_sim_spearman
value: 80.23142141101837
- type: euclidean_pearson
value: 81.20166064704539
- type: euclidean_spearman
value: 80.18961335654585
- type: manhattan_pearson
value: 81.13925443187625
- type: manhattan_spearman
value: 80.07948723044424
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.94262461316023
- type: cos_sim_spearman
value: 80.01596278563865
- type: euclidean_pearson
value: 83.80799622922581
- type: euclidean_spearman
value: 79.94984954947103
- type: manhattan_pearson
value: 83.68473841756281
- type: manhattan_spearman
value: 79.84990707951822
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.57346443146068
- type: cos_sim_spearman
value: 81.54689837570866
- type: euclidean_pearson
value: 81.10909881516007
- type: euclidean_spearman
value: 81.56746243261762
- type: manhattan_pearson
value: 80.87076036186582
- type: manhattan_spearman
value: 81.33074987964402
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.54733787179849
- type: cos_sim_spearman
value: 77.72202105610411
- type: euclidean_pearson
value: 78.9043595478849
- type: euclidean_spearman
value: 77.93422804309435
- type: manhattan_pearson
value: 78.58115121621368
- type: manhattan_spearman
value: 77.62508135122033
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.59880017237558
- type: cos_sim_spearman
value: 89.31088630824758
- type: euclidean_pearson
value: 88.47069261564656
- type: euclidean_spearman
value: 89.33581971465233
- type: manhattan_pearson
value: 88.40774264100956
- type: manhattan_spearman
value: 89.28657485627835
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08055117917084
- type: cos_sim_spearman
value: 85.78491813080304
- type: euclidean_pearson
value: 84.99329155500392
- type: euclidean_spearman
value: 85.76728064677287
- type: manhattan_pearson
value: 84.87947428989587
- type: manhattan_spearman
value: 85.62429454917464
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.14190939287384
- type: cos_sim_spearman
value: 82.27331573306041
- type: euclidean_pearson
value: 81.891896953716
- type: euclidean_spearman
value: 82.37695542955998
- type: manhattan_pearson
value: 81.73123869460504
- type: manhattan_spearman
value: 82.19989168441421
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.84695301843362
- type: cos_sim_spearman
value: 77.87790986014461
- type: euclidean_pearson
value: 76.91981583106315
- type: euclidean_spearman
value: 77.88154772749589
- type: manhattan_pearson
value: 76.94953277451093
- type: manhattan_spearman
value: 77.80499230728604
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.44657840482016
- type: cos_sim_spearman
value: 75.05531095119674
- type: euclidean_pearson
value: 75.88161755829299
- type: euclidean_spearman
value: 74.73176238219332
- type: manhattan_pearson
value: 75.63984765635362
- type: manhattan_spearman
value: 74.86476440770737
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.64700140524133
- type: cos_sim_spearman
value: 86.16014210425672
- type: euclidean_pearson
value: 86.49086860843221
- type: euclidean_spearman
value: 86.09729326815614
- type: manhattan_pearson
value: 86.43406265125513
- type: manhattan_spearman
value: 86.17740150939994
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.91170098764921
- type: cos_sim_spearman
value: 88.12437004058931
- type: euclidean_pearson
value: 88.81828254494437
- type: euclidean_spearman
value: 88.14831794572122
- type: manhattan_pearson
value: 88.93442183448961
- type: manhattan_spearman
value: 88.15254630778304
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.91390577997292
- type: cos_sim_spearman
value: 71.22979457536074
- type: euclidean_pearson
value: 74.40314008106749
- type: euclidean_spearman
value: 72.54972136083246
- type: manhattan_pearson
value: 73.85687539530218
- type: manhattan_spearman
value: 72.09500771742637
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.9301067983089
- type: cos_sim_spearman
value: 80.74989828346473
- type: euclidean_pearson
value: 81.36781301814257
- type: euclidean_spearman
value: 80.9448819964426
- type: manhattan_pearson
value: 81.0351322685609
- type: manhattan_spearman
value: 80.70192121844177
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.13820465980005
- type: cos_sim_spearman
value: 86.73532498758757
- type: euclidean_pearson
value: 87.21329451846637
- type: euclidean_spearman
value: 86.57863198601002
- type: manhattan_pearson
value: 87.06973713818554
- type: manhattan_spearman
value: 86.47534918791499
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.48720108904415
- type: cos_sim_spearman
value: 85.62221757068387
- type: euclidean_pearson
value: 86.1010129512749
- type: euclidean_spearman
value: 85.86580966509942
- type: manhattan_pearson
value: 86.26800938808971
- type: manhattan_spearman
value: 85.88902721678429
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.98021347333516
- type: cos_sim_spearman
value: 84.53806553803501
- type: euclidean_pearson
value: 84.61483347248364
- type: euclidean_spearman
value: 85.14191408011702
- type: manhattan_pearson
value: 84.75297588825967
- type: manhattan_spearman
value: 85.33176753669242
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51856644893233
- type: cos_sim_spearman
value: 85.27510748506413
- type: euclidean_pearson
value: 85.09886861540977
- type: euclidean_spearman
value: 85.62579245860887
- type: manhattan_pearson
value: 84.93017860464607
- type: manhattan_spearman
value: 85.5063988898453
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.581573200584195
- type: cos_sim_spearman
value: 63.05503590247928
- type: euclidean_pearson
value: 63.652564812602094
- type: euclidean_spearman
value: 62.64811520876156
- type: manhattan_pearson
value: 63.506842893061076
- type: manhattan_spearman
value: 62.51289573046917
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.2248801729127
- type: cos_sim_spearman
value: 56.5936604678561
- type: euclidean_pearson
value: 43.98149464089
- type: euclidean_spearman
value: 56.108561882423615
- type: manhattan_pearson
value: 43.86880305903564
- type: manhattan_spearman
value: 56.04671150510166
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.17564527009831
- type: cos_sim_spearman
value: 64.57978560979488
- type: euclidean_pearson
value: 58.8818330154583
- type: euclidean_spearman
value: 64.99214839071281
- type: manhattan_pearson
value: 58.72671436121381
- type: manhattan_spearman
value: 65.10713416616109
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.772131864023297
- type: cos_sim_spearman
value: 34.68200792408681
- type: euclidean_pearson
value: 16.68082419005441
- type: euclidean_spearman
value: 34.83099932652166
- type: manhattan_pearson
value: 16.52605949659529
- type: manhattan_spearman
value: 34.82075801399475
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.42415189043831
- type: cos_sim_spearman
value: 63.54594264576758
- type: euclidean_pearson
value: 57.36577498297745
- type: euclidean_spearman
value: 63.111466379158074
- type: manhattan_pearson
value: 57.584543715873885
- type: manhattan_spearman
value: 63.22361054139183
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.55216762405518
- type: cos_sim_spearman
value: 56.98670142896412
- type: euclidean_pearson
value: 50.15318757562699
- type: euclidean_spearman
value: 56.524941926541906
- type: manhattan_pearson
value: 49.955618528674904
- type: manhattan_spearman
value: 56.37102209240117
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.20540980338571
- type: cos_sim_spearman
value: 59.9009453504406
- type: euclidean_pearson
value: 49.557749853620535
- type: euclidean_spearman
value: 59.76631621172456
- type: manhattan_pearson
value: 49.62340591181147
- type: manhattan_spearman
value: 59.94224880322436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.508169956576985
- type: cos_sim_spearman
value: 66.82461565306046
- type: euclidean_pearson
value: 56.2274426480083
- type: euclidean_spearman
value: 66.6775323848333
- type: manhattan_pearson
value: 55.98277796300661
- type: manhattan_spearman
value: 66.63669848497175
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.86478788045507
- type: cos_sim_spearman
value: 76.7946552053193
- type: euclidean_pearson
value: 75.01598530490269
- type: euclidean_spearman
value: 76.83618917858281
- type: manhattan_pearson
value: 74.68337628304332
- type: manhattan_spearman
value: 76.57480204017773
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.922619099401984
- type: cos_sim_spearman
value: 56.599362477240774
- type: euclidean_pearson
value: 56.68307052369783
- type: euclidean_spearman
value: 54.28760436777401
- type: manhattan_pearson
value: 56.67763566500681
- type: manhattan_spearman
value: 53.94619541711359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.74357206710913
- type: cos_sim_spearman
value: 72.5208244925311
- type: euclidean_pearson
value: 67.49254562186032
- type: euclidean_spearman
value: 72.02469076238683
- type: manhattan_pearson
value: 67.45251772238085
- type: manhattan_spearman
value: 72.05538819984538
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.25734330033191
- type: cos_sim_spearman
value: 76.98349083946823
- type: euclidean_pearson
value: 73.71642838667736
- type: euclidean_spearman
value: 77.01715504651384
- type: manhattan_pearson
value: 73.61712711868105
- type: manhattan_spearman
value: 77.01392571153896
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.18215462781212
- type: cos_sim_spearman
value: 65.54373266117607
- type: euclidean_pearson
value: 64.54126095439005
- type: euclidean_spearman
value: 65.30410369102711
- type: manhattan_pearson
value: 63.50332221148234
- type: manhattan_spearman
value: 64.3455878104313
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30509221440029
- type: cos_sim_spearman
value: 65.99582704642478
- type: euclidean_pearson
value: 63.43818859884195
- type: euclidean_spearman
value: 66.83172582815764
- type: manhattan_pearson
value: 63.055779168508764
- type: manhattan_spearman
value: 65.49585020501449
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.587830825340404
- type: cos_sim_spearman
value: 68.93467614588089
- type: euclidean_pearson
value: 62.3073527367404
- type: euclidean_spearman
value: 69.69758171553175
- type: manhattan_pearson
value: 61.9074580815789
- type: manhattan_spearman
value: 69.57696375597865
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.143220125577066
- type: cos_sim_spearman
value: 67.78857859159226
- type: euclidean_pearson
value: 55.58225107923733
- type: euclidean_spearman
value: 67.80662907184563
- type: manhattan_pearson
value: 56.24953502726514
- type: manhattan_spearman
value: 67.98262125431616
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.826928900322066
- type: cos_sim_spearman
value: 49.578506634400405
- type: euclidean_pearson
value: 27.939890138843214
- type: euclidean_spearman
value: 52.71950519136242
- type: manhattan_pearson
value: 26.39878683847546
- type: manhattan_spearman
value: 47.54609580342499
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.27603854632001
- type: cos_sim_spearman
value: 50.709255283710995
- type: euclidean_pearson
value: 59.5419024445929
- type: euclidean_spearman
value: 50.709255283710995
- type: manhattan_pearson
value: 59.03256832438492
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.00757054859712
- type: cos_sim_spearman
value: 87.29283629622222
- type: euclidean_pearson
value: 86.54824171775536
- type: euclidean_spearman
value: 87.24364730491402
- type: manhattan_pearson
value: 86.5062156915074
- type: manhattan_spearman
value: 87.15052170378574
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.03549357197389
- type: mrr
value: 95.05437645143527
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.260999999999996
- type: map_at_10
value: 66.259
- type: map_at_100
value: 66.884
- type: map_at_1000
value: 66.912
- type: map_at_3
value: 63.685
- type: map_at_5
value: 65.35499999999999
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 67.5
- type: mrr_at_100
value: 68.013
- type: mrr_at_1000
value: 68.038
- type: mrr_at_3
value: 65.61099999999999
- type: mrr_at_5
value: 66.861
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 70.41
- type: ndcg_at_100
value: 73.10600000000001
- type: ndcg_at_1000
value: 73.846
- type: ndcg_at_3
value: 66.133
- type: ndcg_at_5
value: 68.499
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 57.260999999999996
- type: recall_at_10
value: 81.94399999999999
- type: recall_at_100
value: 93.867
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.339
- type: recall_at_5
value: 76.25
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74356435643564
- type: cos_sim_ap
value: 93.13411948212683
- type: cos_sim_f1
value: 86.80521991300147
- type: cos_sim_precision
value: 84.00374181478017
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.67920792079208
- type: dot_ap
value: 89.27277565444479
- type: dot_f1
value: 83.9276990718124
- type: dot_precision
value: 82.04393505253104
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 93.17993008259062
- type: euclidean_f1
value: 86.69396110542476
- type: euclidean_precision
value: 88.78406708595388
- type: euclidean_recall
value: 84.7
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 93.14413755550099
- type: manhattan_f1
value: 86.82483594144371
- type: manhattan_precision
value: 87.66564729867483
- type: manhattan_recall
value: 86
- type: max_accuracy
value: 99.74356435643564
- type: max_ap
value: 93.17993008259062
- type: max_f1
value: 86.82483594144371
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.525863806168566
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.68850574423839
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.71580650644033
- type: mrr
value: 50.50971903913081
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.152190498799484
- type: cos_sim_spearman
value: 29.686180371952727
- type: dot_pearson
value: 27.248664793816342
- type: dot_spearman
value: 28.37748983721745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.6209999999999998
- type: map_at_100
value: 9.690999999999999
- type: map_at_1000
value: 23.733
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.885
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.56700000000001
- type: mrr_at_100
value: 86.56700000000001
- type: mrr_at_1000
value: 86.56700000000001
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 86.56700000000001
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 71.326
- type: ndcg_at_100
value: 54.208999999999996
- type: ndcg_at_1000
value: 49.252
- type: ndcg_at_3
value: 74.235
- type: ndcg_at_5
value: 73.833
- type: precision_at_1
value: 78
- type: precision_at_10
value: 74.8
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 78
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.894
- type: recall_at_100
value: 13.245999999999999
- type: recall_at_1000
value: 46.373
- type: recall_at_3
value: 0.613
- type: recall_at_5
value: 0.991
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.69999999999999
- type: precision
value: 94.11666666666667
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.20809248554913
- type: f1
value: 63.431048720066066
- type: precision
value: 61.69143958161298
- type: recall
value: 68.20809248554913
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.21951219512195
- type: f1
value: 66.82926829268293
- type: precision
value: 65.1260162601626
- type: recall
value: 71.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.26666666666667
- type: precision
value: 95.8
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.3
- type: f1
value: 99.06666666666666
- type: precision
value: 98.95
- type: recall
value: 99.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.63333333333333
- type: precision
value: 96.26666666666668
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.86666666666666
- type: precision
value: 94.31666666666668
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.01492537313433
- type: f1
value: 40.178867566927266
- type: precision
value: 38.179295828549556
- type: recall
value: 47.01492537313433
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.62537480063796
- type: precision
value: 82.44555555555554
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.48780487804879
- type: f1
value: 75.45644599303138
- type: precision
value: 73.37398373983739
- type: recall
value: 80.48780487804879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.95666666666666
- type: precision
value: 91.125
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.73754556500607
- type: f1
value: 89.65168084244632
- type: precision
value: 88.73025516403402
- type: recall
value: 91.73754556500607
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.04347826086956
- type: f1
value: 76.2128364389234
- type: precision
value: 74.2
- type: recall
value: 81.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.65217391304348
- type: f1
value: 79.4376811594203
- type: precision
value: 77.65797101449274
- type: recall
value: 83.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 85.02690476190476
- type: precision
value: 83.96261904761904
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.3
- type: f1
value: 86.52333333333333
- type: precision
value: 85.22833333333332
- type: recall
value: 89.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.01809408926418
- type: f1
value: 59.00594446432805
- type: precision
value: 56.827215807915444
- type: recall
value: 65.01809408926418
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.2
- type: f1
value: 88.58
- type: precision
value: 87.33333333333334
- type: recall
value: 91.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.199999999999996
- type: f1
value: 53.299166276284915
- type: precision
value: 51.3383908045977
- type: recall
value: 59.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.2
- type: precision
value: 90.25
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.76190476190476
- type: f1
value: 59.867110667110666
- type: precision
value: 58.07390192653351
- type: recall
value: 64.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.2
- type: f1
value: 71.48147546897547
- type: precision
value: 69.65409090909091
- type: recall
value: 76.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.14
- type: precision
value: 91.35833333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.2
- type: precision
value: 96.85000000000001
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 92.93333333333334
- type: precision
value: 92.13333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.1
- type: f1
value: 69.14817460317461
- type: precision
value: 67.2515873015873
- type: recall
value: 74.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 94.01333333333335
- type: precision
value: 93.46666666666667
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.9
- type: f1
value: 72.07523809523809
- type: precision
value: 70.19777777777779
- type: recall
value: 76.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.31666666666666
- type: precision
value: 91.43333333333332
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.76666666666668
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.85714285714286
- type: f1
value: 90.92093441150045
- type: precision
value: 90.00449236298293
- type: recall
value: 92.85714285714286
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.16239316239316
- type: f1
value: 91.33903133903132
- type: precision
value: 90.56267806267806
- type: recall
value: 93.16239316239316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.25666666666666
- type: precision
value: 89.25833333333334
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.22727272727272
- type: f1
value: 87.53030303030303
- type: precision
value: 86.37121212121211
- type: recall
value: 90.22727272727272
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.03563941299791
- type: f1
value: 74.7349505840072
- type: precision
value: 72.9035639412998
- type: recall
value: 79.03563941299791
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97
- type: f1
value: 96.15
- type: precision
value: 95.76666666666668
- type: recall
value: 97
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 71.55642023346303
- type: precision
value: 69.7544932369835
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.119658119658126
- type: f1
value: 51.65242165242165
- type: precision
value: 49.41768108434775
- type: recall
value: 58.119658119658126
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.52055555555555
- type: precision
value: 67.7574938949939
- type: recall
value: 74.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.31666666666666
- type: precision
value: 92.60000000000001
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.63551401869158
- type: f1
value: 72.35202492211837
- type: precision
value: 70.60358255451713
- type: recall
value: 76.63551401869158
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.4811111111111
- type: precision
value: 87.7452380952381
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95
- type: f1
value: 93.60666666666667
- type: precision
value: 92.975
- type: recall
value: 95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 63.01595782872099
- type: precision
value: 61.596587301587306
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.52999999999999
- type: precision
value: 94
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.28999999999999
- type: precision
value: 92.675
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.83
- type: precision
value: 88.92
- type: recall
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34222222222223
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.333333333333336
- type: f1
value: 55.31203703703703
- type: precision
value: 53.39971108326371
- type: recall
value: 60.333333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.9
- type: f1
value: 11.099861903031458
- type: precision
value: 10.589187932631877
- type: recall
value: 12.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7
- type: f1
value: 83.0152380952381
- type: precision
value: 81.37833333333333
- type: recall
value: 86.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.39285714285714
- type: f1
value: 56.832482993197274
- type: precision
value: 54.56845238095237
- type: recall
value: 63.39285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.73765093304062
- type: f1
value: 41.555736920720456
- type: precision
value: 39.06874531737319
- type: recall
value: 48.73765093304062
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.099999999999994
- type: f1
value: 36.540165945165946
- type: precision
value: 35.05175685425686
- type: recall
value: 41.099999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.42333333333333
- type: precision
value: 92.75833333333333
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.63333333333334
- type: precision
value: 93.01666666666665
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.64833333333334
- type: precision
value: 71.90282106782105
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.4
- type: f1
value: 54.90521367521367
- type: precision
value: 53.432840025471606
- type: recall
value: 59.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.6
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 62.25926129426129
- type: precision
value: 60.408376623376626
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.60666666666667
- type: precision
value: 86.45277777777778
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97
- type: precision
value: 96.65
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39746031746031
- type: precision
value: 90.6125
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.11678832116788
- type: f1
value: 27.210415386260234
- type: precision
value: 26.20408990846947
- type: recall
value: 32.11678832116788
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.787319277832475
- type: precision
value: 6.3452094433344435
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.08
- type: precision
value: 94.61666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 93.88333333333333
- type: precision
value: 93.18333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.11904761904762
- type: f1
value: 80.69444444444444
- type: precision
value: 78.72023809523809
- type: recall
value: 85.11904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.1
- type: f1
value: 9.276381801735853
- type: precision
value: 8.798174603174601
- type: recall
value: 11.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.56107660455487
- type: f1
value: 58.70433569191332
- type: precision
value: 56.896926581464015
- type: recall
value: 63.56107660455487
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.10000000000001
- type: precision
value: 92.35
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 96.01222222222222
- type: precision
value: 95.67083333333332
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.911555250305249
- type: precision
value: 7.631246556216846
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.48917748917748
- type: f1
value: 72.27375798804371
- type: precision
value: 70.14430014430013
- type: recall
value: 77.48917748917748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.09923664122137
- type: f1
value: 72.61541257724463
- type: precision
value: 70.8998380754106
- type: recall
value: 77.09923664122137
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2532751091703
- type: f1
value: 97.69529354682193
- type: precision
value: 97.42843279961184
- type: recall
value: 98.2532751091703
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.8
- type: f1
value: 79.14672619047619
- type: precision
value: 77.59489247311828
- type: recall
value: 82.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.35028248587571
- type: f1
value: 92.86252354048965
- type: precision
value: 92.2080979284369
- type: recall
value: 94.35028248587571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.282429263935621
- type: precision
value: 5.783274240739785
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 91.025
- type: precision
value: 90.30428571428571
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81
- type: f1
value: 77.8232380952381
- type: precision
value: 76.60194444444444
- type: recall
value: 81
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91
- type: f1
value: 88.70857142857142
- type: precision
value: 87.7
- type: recall
value: 91
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.3
- type: precision
value: 94.76666666666667
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 7.001008218834307
- type: precision
value: 6.708329562594269
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1313672922252
- type: f1
value: 84.09070598748882
- type: precision
value: 82.79171454104429
- type: recall
value: 87.1313672922252
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.73333333333332
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.29249011857708
- type: f1
value: 36.981018542283365
- type: precision
value: 35.415877813576024
- type: recall
value: 42.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.80281690140845
- type: f1
value: 80.86854460093896
- type: precision
value: 79.60093896713614
- type: recall
value: 83.80281690140845
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.26946107784431
- type: f1
value: 39.80235464678088
- type: precision
value: 38.14342660001342
- type: recall
value: 45.26946107784431
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.9
- type: precision
value: 92.26666666666668
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.93103448275862
- type: f1
value: 33.15192743764172
- type: precision
value: 31.57456528146183
- type: recall
value: 37.93103448275862
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.01408450704226
- type: f1
value: 63.41549295774648
- type: precision
value: 61.342778895595806
- type: recall
value: 69.01408450704226
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.66666666666667
- type: f1
value: 71.60705960705961
- type: precision
value: 69.60683760683762
- type: recall
value: 76.66666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.48333333333333
- type: precision
value: 93.83333333333333
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.81837160751566
- type: f1
value: 48.435977731384824
- type: precision
value: 47.11291973845539
- type: recall
value: 52.81837160751566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.9
- type: f1
value: 38.88962621607783
- type: precision
value: 36.95936507936508
- type: recall
value: 44.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.55374592833876
- type: f1
value: 88.22553125484721
- type: precision
value: 87.26927252985884
- type: recall
value: 90.55374592833876
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.13333333333333
- type: precision
value: 92.45333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.99666666666667
- type: precision
value: 91.26666666666668
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.03937007874016
- type: f1
value: 81.75853018372703
- type: precision
value: 80.34120734908137
- type: recall
value: 85.03937007874016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.5
- type: precision
value: 84.25833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.51246537396122
- type: f1
value: 60.02297410192148
- type: precision
value: 58.133467727289236
- type: recall
value: 65.51246537396122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.89
- type: precision
value: 94.39166666666667
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.692307692307686
- type: f1
value: 53.162393162393165
- type: precision
value: 51.70673076923077
- type: recall
value: 57.692307692307686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.21190476190475
- type: precision
value: 88.08666666666667
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88
- type: f1
value: 85.47
- type: precision
value: 84.43266233766234
- type: recall
value: 88
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 90.64999999999999
- type: precision
value: 89.68333333333332
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.30660377358491
- type: f1
value: 76.33044137466307
- type: precision
value: 74.78970125786164
- type: recall
value: 80.30660377358491
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.44
- type: precision
value: 94.99166666666666
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.53284671532847
- type: f1
value: 95.37712895377129
- type: precision
value: 94.7992700729927
- type: recall
value: 96.53284671532847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89
- type: f1
value: 86.23190476190476
- type: precision
value: 85.035
- type: recall
value: 89
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.585
- type: map_at_10
value: 9.012
- type: map_at_100
value: 14.027000000000001
- type: map_at_1000
value: 15.565000000000001
- type: map_at_3
value: 5.032
- type: map_at_5
value: 6.657
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 45.377
- type: mrr_at_100
value: 46.119
- type: mrr_at_1000
value: 46.127
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 42.585
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 23.395
- type: ndcg_at_100
value: 33.342
- type: ndcg_at_1000
value: 45.523
- type: ndcg_at_3
value: 25.158
- type: ndcg_at_5
value: 23.427
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 6.714
- type: precision_at_1000
value: 1.473
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.585
- type: recall_at_10
value: 15.418999999999999
- type: recall_at_100
value: 42.485
- type: recall_at_1000
value: 79.536
- type: recall_at_3
value: 6.239999999999999
- type: recall_at_5
value: 8.996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.3234
- type: ap
value: 14.361688653847423
- type: f1
value: 54.819068624319044
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.97792869269949
- type: f1
value: 62.28965628513728
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.90540145385218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.53513739047506
- type: cos_sim_ap
value: 75.27741586677557
- type: cos_sim_f1
value: 69.18792902473774
- type: cos_sim_precision
value: 67.94708725515136
- type: cos_sim_recall
value: 70.47493403693932
- type: dot_accuracy
value: 84.7052512368123
- type: dot_ap
value: 69.36075482849378
- type: dot_f1
value: 64.44688376631296
- type: dot_precision
value: 59.92288500793831
- type: dot_recall
value: 69.70976253298153
- type: euclidean_accuracy
value: 86.60666388508076
- type: euclidean_ap
value: 75.47512772621097
- type: euclidean_f1
value: 69.413872536473
- type: euclidean_precision
value: 67.39562624254472
- type: euclidean_recall
value: 71.55672823218997
- type: manhattan_accuracy
value: 86.52917684925792
- type: manhattan_ap
value: 75.34000110496703
- type: manhattan_f1
value: 69.28489190226429
- type: manhattan_precision
value: 67.24608889992551
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 86.60666388508076
- type: max_ap
value: 75.47512772621097
- type: max_f1
value: 69.413872536473
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.01695967710637
- type: cos_sim_ap
value: 85.8298270742901
- type: cos_sim_f1
value: 78.46988128389272
- type: cos_sim_precision
value: 74.86017897091722
- type: cos_sim_recall
value: 82.44533415460425
- type: dot_accuracy
value: 88.19420188613343
- type: dot_ap
value: 83.82679165901324
- type: dot_f1
value: 76.55833777304208
- type: dot_precision
value: 75.6884875846501
- type: dot_recall
value: 77.44841392054204
- type: euclidean_accuracy
value: 89.03054294252338
- type: euclidean_ap
value: 85.89089555185325
- type: euclidean_f1
value: 78.62997658079624
- type: euclidean_precision
value: 74.92329149232914
- type: euclidean_recall
value: 82.72251308900523
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.86458997929147
- type: manhattan_f1
value: 78.50685331000291
- type: manhattan_precision
value: 74.5499861534201
- type: manhattan_recall
value: 82.90729904527257
- type: max_accuracy
value: 89.03054294252338
- type: max_ap
value: 85.89089555185325
- type: max_f1
value: 78.62997658079624
---
# chris-code/multilingual-e5-large-Q8_0-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
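Because this checkpoint is an embedding model rather than a chat model, generating text with `llama-cli` is mostly a smoke test. For embeddings from the command line, recent llama.cpp builds ship a `llama-embedding` example binary; the sketch below assumes it accepts the same `--hf-repo`/`--hf-file` download flags as `llama-cli` (verify with `llama-embedding --help`). The `query:`/`passage:` input prefixes follow the convention described in the original E5 model card.
```bash
# Sketch: compute an embedding with the llama-embedding example binary
# (assumes --hf-repo/--hf-file are supported by your llama.cpp build).
llama-embedding --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF \
  --hf-file multilingual-e5-large-q8_0.gguf \
  -p "query: how much protein should a female eat"
```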
### Server:
```bash
llama-server --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
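To serve embeddings over HTTP, the server needs its embedding mode enabled. The sketch below is an assumption-laden example: the switch is named `--embedding` or `--embeddings` depending on the llama.cpp version (check `llama-server --help`), and the OpenAI-compatible endpoint is assumed to be `/v1/embeddings` on the default port 8080.
```bash
# Sketch: same server command as above, plus the embedding switch
# (named --embedding or --embeddings depending on the llama.cpp version).
llama-server --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF \
  --hf-file multilingual-e5-large-q8_0.gguf -c 2048 --embedding

# Then request a vector via the (assumed) OpenAI-compatible endpoint.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "query: how much protein should a female eat"}'
```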
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
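In newer llama.cpp checkouts the Makefile build has been deprecated in favor of CMake. A rough CMake equivalent is sketched below; it assumes the `LLAMA_CURL` and `GGML_CUDA` options exist in your checkout (option names have changed over time, so verify against the repo's build documentation). With a CMake build, the binaries typically land in `build/bin/`, so adjust the paths in Step 3 accordingly.
```bash
# Sketch of a CMake-based build (assumes the LLAMA_CURL / GGML_CUDA options
# are available in your llama.cpp checkout; adjust to your hardware).
cd llama.cpp
cmake -B build -DLLAMA_CURL=ON   # add -DGGML_CUDA=ON for NVIDIA GPUs on Linux
cmake --build build --config Release
```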
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo chris-code/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
davanstrien/contrastive-bert-base-uncased | davanstrien | sentence-similarity | [
"sentence-transformers",
"tensorboard",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1407",
"loss:Contrastive",
"arxiv:1908.10084",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,726 | 1,728 | 7 | 0 | ---
base_model: google-bert/bert-base-uncased
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1407
- loss:Contrastive
widget:
- source_sentence: fact checking dataset
sentences:
- "DATASET_NAME: \"mwong/fever-claim-related\"\nFEATURES: {'claim': {'dtype': 'string',\
\ '_type': 'Value'}, 'evidence': {'dtype': 'string', '_type': 'Value'}, 'labels':\
\ {'dtype': 'int64', '_type': 'Value'}, 'input_ids': {'feature': {'dtype': 'int32',\
\ '_type': 'Value'}, '_type': 'Sequence'}, 'token_type_ids': {'feature': {'dtype':\
\ 'int8', '_type': 'Value'}, '_type': 'Sequence'}, 'attention_mask': {'feature':\
\ {'dtype': 'int8', '_type': 'Value'}, '_type': 'Sequence'}}\nDATA SAMPLE:\n[\n\
\ {\n \"row_idx\": 0,\n \"row\": {\n \"claim\": \"Nikolaj Coster-Waldau\
\ worked with the Fox Broadcasting Company.\",\n \"evidence\": \"Nikolaj\
\ Coster-Waldau -LRB- -LSB- ne\\u0261ola\\u026a k\\u02b0\\u028csd\\u0250 \\u02c8\\\
u028bald\\u0251\\u028a -RSB- ; born 27 July 1970 -RRB- is a Danish actor , producer\
\ and screenwriter .. He graduated from Danish National School of Theatre in Copenhagen\
\ in 1993 .. Danish National School of Theatre. Danish National School of Theatre\
\ and Contemporary Dance. Copenhagen. Copenhagen. Coster-Waldau 's breakthrough\
\ performance in Denmark was his role in the film Nightwatch -LRB- 1994 -RRB-\
\ .. Nightwatch. Nightwatch ( 1994 film ). Since then he has appeared in numerous\
\ films in his native Scandinavia and Europe in general , including Headhunters\
\ -LRB- 2011 -RRB- and A Thousand Times Good Night -LRB- 2013 -RRB- .. Headhunters.\
\ Headhunters ( film ). A Thousand Times Good Night. A Thousand Times Good Night.\
\ In the United States , his debut film role was in the war film Black Hawk Down\
\ -LRB- 2001 -RRB- , playing Medal of Honor recipient Gary Gordon .. Black Hawk\
\ Down. Black Hawk Down ( film ). Gary Gordon. Gary Gordon. He then played Detective\
\ John Amsterdam in the short-lived Fox television series New Amsterdam -LRB-\
\ 2008 -RRB- , as well as appearing as Frank Pike in the 2009 Fox television film\
\ Virtuality , originally intended as a pilot .. Fox television. Fox Broadcasting\
\ Company. New Amsterdam. New Amsterdam ( TV series ). Virtuality. Virtuality\
\ ( TV series ). He became widely known to a broad audience for his current role\
\ as Ser Jaime Lannister , in the HBO series Game of Thrones .. Jaime Lannister.\
\ Jaime Lannister. HBO. HBO. Game of Thrones. Game of Thrones. In 2017 , he became\
\ one of the highest paid actors on television and earned # 2 million per episode\
\ of Game of Thrones .. Game of Thrones. Game of Thrones. highest paid actors\
\ on television. List of highest paid American television stars\",\n \"labels\"\
: 1,\n \"input_ids\": [\n 2,\n 14569,\n 728,\n \
\ 1516,\n 106,\n 8,\n 6501,\n 1346,\n 13,\n\
\ 8,\n 255,\n 7549,\n 8,\n 13,\n 8,\n\
\ 7532,\n 220,\n 8,\n 2836,\n 1,\n 2268,\n\
\ 1,\n 9190,\n 1,\n 18,\n 43,\n 1,\n\
\ 13,\n 1,\n 8100,\n 1,\n 13,\n 8,\n\
\ 1224,\n 220,\n 8,\n 13,\n 73,\n 386,\n\
\ 1298,\n 313,\n 1133,\n 13,\n 8,\n \
\ 139,\n 7549,\n 8,\n 25,\n 21,\n 4708,\n \
\ 1574,\n 13,\n 15,\n 1421,\n 17,\n 11993,\n\
\ 13,\n 9,\n 9,\n 24,\n 2158,\n 37,\n\
\ 4708,\n 152,\n 116,\n 16,\n 1081,\n \
\ 19,\n 10088,\n 19,\n 1609,\n 13,\n 9,\n\
\ 9,\n 4708,\n 152,\n 116,\n 16,\n 1081,\n\
\ 9,\n 4708,\n 152,\n 116,\n 16,\n 1081,\n\
\ 17,\n 2152,\n 1329,\n 9,\n 10088,\n \
\ 9,\n 10088,\n 9,\n 1516,\n 106,\n 8,\n\
\ 6501,\n 1346,\n 13,\n 22,\n 18,\n \
\ 13476,\n 956,\n 19,\n 4913,\n 23,\n 33,\n\
\ 597,\n 19,\n 14,\n 171,\n 343,\n 9054,\n\
\ 13,\n 8,\n 255,\n 7549,\n 8,\n 1530,\n\
\ 13,\n 8,\n 139,\n 7549,\n 8,\n 13,\n\
\ 9,\n 9,\n 343,\n 9054,\n 9,\n 343,\n\
\ 9054,\n 13,\n 5,\n 1530,\n 171,\n \
\ 13,\n 6,\n 9,\n 179,\n 94,\n 24,\n \
\ 63,\n 682,\n 19,\n 1548,\n 1491,\n 19,\n\
\ 33,\n 1275,\n 23335,\n 17,\n 1002,\n \
\ 19,\n 297,\n 13,\n 15,\n 215,\n 157,\n\
\ 15214,\n 18,\n 13,\n 8,\n 255,\n 7549,\n\
\ 8,\n 542,\n 13,\n 8,\n 139,\n 7549,\n\
\ 8,\n 17,\n 21,\n 2996,\n 436,\n 254,\n\
\ 343,\n 13,\n 8,\n 255,\n 7549,\n 8,\n\
\ 616,\n 13,\n 8,\n 139,\n 7549,\n 8,\n\
\ 13,\n 9,\n 9,\n 157,\n 15214,\n 18,\n\
\ 9,\n 157,\n 15214,\n 18,\n 13,\n 5,\n\
\ 171,\n 13,\n 6,\n 9,\n 21,\n 2996,\n\
\ 436,\n 254,\n 343,\n 9,\n 21,\n 2996,\n\
\ 436,\n 254,\n 343,\n 9,\n 19,\n 14,\n\
\ 181,\n 202,\n 13,\n 15,\n 33,\n 893,\n\
\ 171,\n 597,\n 23,\n 19,\n 14,\n 176,\n\
\ 171,\n 319,\n 8596,\n 125,\n 13,\n \
\ 8,\n 255,\n 7549,\n 8,\n 1089,\n 13,\n \
\ 8,\n 139,\n 7549,\n 8,\n 13,\n 15,\n\
\ 791,\n 1217,\n 16,\n 1853,\n 7340,\n \
\ 4443,\n 3791,\n 13,\n 9,\n 9,\n 319,\n\
\ 8596,\n 125,\n 9,\n 319,\n 8596,\n \
\ 125,\n 13,\n 5,\n 171,\n 13,\n 6,\n \
\ 9,\n 4443,\n 3791,\n 9,\n 4443,\n 3791,\n\
\ 9,\n 24,\n 94,\n 257,\n 4485,\n 239,\n\
\ 7066,\n 19,\n 14,\n 502,\n 8,\n 8279,\n\
\ 2385,\n 633,\n 231,\n 78,\n 7066,\n \
\ 13,\n 8,\n 255,\n 7549,\n 8,\n 570,\n \
\ 13,\n 8,\n 139,\n 7549,\n 8,\n 13,\n\
\ 15,\n 28,\n 134,\n 28,\n 4870,\n 28,\n\
\ 1556,\n 12333,\n 19,\n 14,\n 588,\n \
\ 2385,\n 633,\n 171,\n 6599,\n 856,\n 13,\n\
\ 15,\n 912,\n 2081,\n 28,\n 21,\n 2648,\n\
\ 13,\n 9,\n 9,\n 2385,\n 633,\n 9,\n\
\ 2385,\n 3647,\n 237,\n 9,\n 78,\n \
\ 7066,\n 9,\n 78,\n 7066,\n 13,\n 5,\n \
\ 983,\n 231,\n 13,\n 6,\n 9,\n 6599,\n\
\ 856,\n 9,\n 6599,\n 856,\n 13,\n 5,\n\
\ 983,\n 231,\n 13,\n 6,\n 9,\n 24,\n\
\ 178,\n 2525,\n 167,\n 20,\n 21,\n \
\ 2748,\n 2663,\n 26,\n 33,\n 866,\n 597,\n\
\ 28,\n 4245,\n 16721,\n 6178,\n 103,\n \
\ 9959,\n 13,\n 15,\n 19,\n 14,\n 17765,\n\
\ 231,\n 250,\n 16,\n 4717,\n 18,\n \
\ 13,\n 9,\n 9,\n 16721,\n 6178,\n 103,\n \
\ 9959,\n 9,\n 16721,\n 6178,\n 103,\n \
\ 9959,\n 9,\n 17765,\n 9,\n 17765,\n 9,\n\
\ 250,\n 16,\n 4717,\n 18,\n 9,\n 250,\n\
\ 16,\n 4717,\n 18,\n 9,\n 19,\n 814,\n\
\ 13,\n 15,\n 24,\n 178,\n 53,\n 16,\n\
\ 14,\n 1554,\n 2192,\n 4977,\n 27,\n \
\ 633,\n 17,\n 1931,\n 6926,\n 172,\n 507,\n\
\ 416,\n 942,\n 16,\n 250,\n 16,\n 4717,\n\
\ 18,\n 13,\n 9,\n 9,\n 250,\n 16,\n\
\ 4717,\n 18,\n 9,\n 250,\n 16,\n 4717,\n\
\ 18,\n 9,\n 1554,\n 2192,\n 4977,\n \
\ 27,\n 633,\n 9,\n 968,\n 16,\n 1554,\n \
\ 2192,\n 189,\n 633,\n 1610,\n 3,\n \
\ 14569,\n 728,\n 1516,\n 106,\n 8,\n 6501,\n\
\ 1346,\n 577,\n 29,\n 14,\n 2385,\n \
\ 3647,\n 237,\n 9,\n 3\n ],\n \"token_type_ids\"\
: [\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1\n ],\n \"attention_mask\"\
: [\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1\n ]\n },\n \"\
truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \"\
claim\": \"Nikolaj Coster-Waldau worked with the Fox Broadcasting Company.\",\n\
\ \"evidence\": \"Majboor -LRB- Hindi : \\u092e\\u091c\\u092c\\u0930 , English\
\ : Compulsed -RRB- is a 1974 Indian Hindi crime-thriller film directed by Ravi\
\ Tandon .. Ravi Tandon. Ravi Tandon. Hindi. Hindi. crime. crime film. thriller\
\ film. thriller film. Music is by Laxmikant Pyarelal and lyrics by Anand Bakshi\
\ .. Laxmikant Pyarelal. Laxmikant Pyarelal. Anand Bakshi. Anand Bakshi. The film\
\ was written by Salim-Javed .. Salim-Javed. Salim-Javed. The movie stars Amitabh\
\ Bachchan , Parveen Babi , Pran , Madan Puri , Rehman and Farida Jalal .. Amitabh\
\ Bachchan. Amitabh Bachchan. Parveen Babi. Parveen Babi. Pran. Pran ( actor ).\
\ Farida Jalal. Farida Jalal. Madan Puri. Madan Puri. Rehman. Rehman ( actor ).\
\ It is a remake of an American film titled Zig Zag -LRB- 1970 film -RRB- starring\
\ George Kennedy The film was later remade in Telugu by director K. Raghavendra\
\ Rao as Raja -LRB- 1976 -RRB- starring Shobhan Babu and Jayasudha .. George Kennedy.\
\ George Kennedy. Telugu. Telugu language. K. Raghavendra Rao. K. Raghavendra\
\ Rao. Raja. Raja ( 1976 film ). Shobhan Babu. Shobhan Babu. Jayasudha. Jayasudha.\
\ It was also remade in Tamil as Naan Vazhavaippen -LRB- 1979 -RRB- with Sivaji\
\ Ganeshan and K. R. Vijaya , and in Gujarati as Naseeb No Khel -LRB- 1982 -RRB-\
\ and also in Malayalam as Ee Kaikalil -LRB- 1986 -RRB- .. Tamil. Tamil language.\
\ Naan Vazhavaippen. Naan Vazhavaippen. Sivaji Ganeshan. Sivaji Ganeshan. K. R.\
\ Vijaya. K. R. Vijaya. Gujarati. Gujarati language. Ee Kaikalil. Ee Kaikalil\"\
,\n \"labels\": 0,\n \"input_ids\": [\n 2,\n 7083,\n \
\ 1192,\n 248,\n 13,\n 8,\n 255,\n 7549,\n\
\ 8,\n 10748,\n 13,\n 45,\n 13,\n 1,\n\
\ 13,\n 15,\n 486,\n 13,\n 45,\n 13,\n\
\ 960,\n 26129,\n 43,\n 13,\n 8,\n 139,\n\
\ 7549,\n 8,\n 25,\n 21,\n 2309,\n 909,\n\
\ 10748,\n 2237,\n 8,\n 96,\n 10568,\n \
\ 106,\n 171,\n 1012,\n 34,\n 13251,\n 3461,\n\
\ 804,\n 13,\n 9,\n 9,\n 13251,\n 3461,\n\
\ 804,\n 9,\n 13251,\n 3461,\n 804,\n \
\ 9,\n 10748,\n 9,\n 10748,\n 9,\n 2237,\n\
\ 9,\n 2237,\n 171,\n 9,\n 11892,\n \
\ 171,\n 9,\n 11892,\n 171,\n 9,\n 232,\n \
\ 25,\n 34,\n 333,\n 396,\n 11753,\n \
\ 1830,\n 7103,\n 1509,\n 10834,\n 17,\n 3114,\n\
\ 34,\n 40,\n 290,\n 334,\n 23207,\n \
\ 13,\n 9,\n 9,\n 333,\n 396,\n 11753,\n \
\ 1830,\n 7103,\n 1509,\n 10834,\n 9,\n \
\ 333,\n 396,\n 11753,\n 1830,\n 7103,\n \
\ 1509,\n 10834,\n 9,\n 40,\n 290,\n 334,\n\
\ 23207,\n 9,\n 40,\n 290,\n 334,\n \
\ 23207,\n 9,\n 14,\n 171,\n 23,\n 642,\n \
\ 34,\n 26036,\n 8,\n 1004,\n 4763,\n \
\ 13,\n 9,\n 9,\n 26036,\n 8,\n 1004,\n \
\ 4763,\n 9,\n 26036,\n 8,\n 1004,\n 4763,\n\
\ 9,\n 14,\n 1308,\n 1610,\n 21,\n 16803,\n\
\ 6216,\n 8347,\n 3606,\n 13,\n 15,\n \
\ 22259,\n 7500,\n 8514,\n 49,\n 13,\n 15,\n\
\ 3865,\n 103,\n 13,\n 15,\n 2073,\n \
\ 210,\n 13,\n 15806,\n 13,\n 15,\n 302,\n\
\ 19140,\n 17,\n 463,\n 3405,\n 27937,\n \
\ 13,\n 9,\n 9,\n 21,\n 16803,\n 6216,\n\
\ 8347,\n 3606,\n 9,\n 21,\n 16803,\n \
\ 6216,\n 8347,\n 3606,\n 9,\n 22259,\n 7500,\n\
\ 8514,\n 49,\n 9,\n 22259,\n 7500,\n \
\ 8514,\n 49,\n 9,\n 3865,\n 103,\n 9,\n\
\ 3865,\n 103,\n 13,\n 5,\n 1574,\n \
\ 13,\n 6,\n 9,\n 463,\n 3405,\n 27937,\n \
\ 9,\n 463,\n 3405,\n 27937,\n 9,\n 2073,\n\
\ 210,\n 13,\n 15806,\n 9,\n 2073,\n \
\ 210,\n 13,\n 15806,\n 9,\n 302,\n 19140,\n\
\ 9,\n 302,\n 19140,\n 13,\n 5,\n 1574,\n\
\ 13,\n 6,\n 9,\n 32,\n 25,\n 21,\n\
\ 12783,\n 16,\n 40,\n 189,\n 171,\n \
\ 3008,\n 13,\n 16594,\n 13,\n 23189,\n 13,\n\
\ 8,\n 255,\n 7549,\n 8,\n 1133,\n 171,\n\
\ 13,\n 8,\n 139,\n 7549,\n 8,\n 3154,\n\
\ 665,\n 4586,\n 14,\n 171,\n 23,\n \
\ 138,\n 27115,\n 19,\n 13867,\n 34,\n 559,\n\
\ 680,\n 9,\n 6704,\n 9671,\n 3030,\n \
\ 10440,\n 28,\n 6835,\n 13,\n 8,\n 255,\n\
\ 7549,\n 8,\n 2325,\n 13,\n 8,\n 139,\n\
\ 7549,\n 8,\n 3154,\n 5799,\n 220,\n \
\ 1841,\n 19641,\n 17,\n 13589,\n 1666,\n \
\ 7977,\n 13,\n 9,\n 9,\n 665,\n 4586,\n \
\ 9,\n 665,\n 4586,\n 9,\n 13867,\n 9,\n\
\ 13867,\n 816,\n 9,\n 680,\n 9,\n 6704,\n\
\ 9671,\n 3030,\n 10440,\n 9,\n 680,\n \
\ 9,\n 6704,\n 9671,\n 3030,\n 10440,\n \
\ 9,\n 6835,\n 9,\n 6835,\n 13,\n 5,\n \
\ 2325,\n 171,\n 13,\n 6,\n 9,\n 5799,\n\
\ 220,\n 1841,\n 19641,\n 9,\n 5799,\n \
\ 220,\n 1841,\n 19641,\n 9,\n 13589,\n \
\ 1666,\n 7977,\n 9,\n 13589,\n 1666,\n 7977,\n\
\ 9,\n 32,\n 23,\n 67,\n 27115,\n 19,\n\
\ 4959,\n 28,\n 1775,\n 210,\n 13,\n \
\ 24319,\n 20648,\n 19128,\n 219,\n 13,\n \
\ 8,\n 255,\n 7549,\n 8,\n 2188,\n 13,\n \
\ 8,\n 139,\n 7549,\n 8,\n 29,\n 16646,\n\
\ 1616,\n 24150,\n 210,\n 17,\n 680,\n \
\ 9,\n 761,\n 9,\n 18522,\n 13,\n 15,\n\
\ 17,\n 19,\n 12895,\n 49,\n 28,\n 1775,\n\
\ 1798,\n 220,\n 90,\n 13,\n 28798,\n \
\ 13,\n 8,\n 255,\n 7549,\n 8,\n 2136,\n\
\ 13,\n 8,\n 139,\n 7549,\n 8,\n 17,\n\
\ 67,\n 19,\n 16125,\n 28,\n 13,\n 2851,\n\
\ 5775,\n 6766,\n 947,\n 13,\n 8,\n \
\ 255,\n 7549,\n 8,\n 1901,\n 13,\n 8,\n \
\ 139,\n 7549,\n 8,\n 13,\n 9,\n 9,\n\
\ 4959,\n 9,\n 4959,\n 816,\n 9,\n 1775,\n\
\ 210,\n 13,\n 24319,\n 20648,\n 19128,\n \
\ 219,\n 9,\n 1775,\n 210,\n 13,\n 24319,\n\
\ 20648,\n 19128,\n 219,\n 9,\n 16646,\n \
\ 1616,\n 24150,\n 210,\n 9,\n 16646,\n \
\ 1616,\n 24150,\n 210,\n 9,\n 680,\n 9,\n\
\ 761,\n 9,\n 18522,\n 9,\n 680,\n 9,\n\
\ 761,\n 9,\n 18522,\n 9,\n 12895,\n \
\ 49,\n 9,\n 12895,\n 49,\n 816,\n 9,\n \
\ 13,\n 2851,\n 5775,\n 6766,\n 947,\n \
\ 9,\n 3,\n 14569,\n 728,\n 1516,\n 106,\n\
\ 8,\n 6501,\n 1346,\n 577,\n 29,\n \
\ 14,\n 2385,\n 3647,\n 237,\n 9,\n 3\n \
\ ],\n \"token_type_ids\": [\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n\
\ 0,\n 0,\n 0,\n 0,\n 0,\n 0,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1\n ],\n \"attention_mask\": [\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n\
\ 1,\n 1,\n 1,\n 1,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 1\n ]\n },\n \"truncated_cells\"\
: []\n }\n]"
- "DATASET_NAME: \"nikchar/20k_claims_train_final\"\nFEATURES: {'claim': {'dtype':\
\ 'string', '_type': 'Value'}, 'text': {'dtype': 'string', '_type': 'Value'},\
\ 'label': {'dtype': 'int64', '_type': 'Value'}}\nDATA SAMPLE:\n[\n {\n \"\
row_idx\": 0,\n \"row\": {\n \"claim\": \"Billy Bob Thornton is divorced\
\ from Angelina Jolie.\",\n \"text\": \"Angelina Jolie Pitt -LRB- -LSB- d\\\
u0292o\\u028a\\u02c8li\\u02d0 -RSB- ; n\\u00e9e Voight ; born June 4 , 1975 -RRB-\
\ is an American actress , filmmaker , philanthropist and humanitarian . She has\
\ received an Academy Award , two Screen Actors Guild Awards , and three Golden\
\ Globe Awards , and has been cited as Hollywood 's highest-paid actress . Jolie\
\ made her screen debut as a child alongside her father , Jon Voight , in Lookin\
\ ' to Get Out -LRB- 1982 -RRB- . Her film career began in earnest a decade later\
\ with the low-budget production Cyborg 2 -LRB- 1993 -RRB- , followed by her first\
\ leading role in a major film , Hackers -LRB- 1995 -RRB- . She starred in the\
\ critically acclaimed biographical cable films George Wallace -LRB- 1997 -RRB-\
\ and Gia -LRB- 1998 -RRB- , and won an Academy Award for Best Supporting Actress\
\ for her performance in the drama Girl , Interrupted -LRB- 1999 -RRB- . Jolie\
\ 's starring role as the video game heroine Lara Croft in Lara Croft : Tomb Raider\
\ -LRB- 2001 -RRB- established her as a leading Hollywood actress . She continued\
\ her successful action-star career with Mr. & Mrs. Smith -LRB- 2005 -RRB- , Wanted\
\ -LRB- 2008 -RRB- , and Salt -LRB- 2010 -RRB- , and received critical acclaim\
\ for her performances in the dramas A Mighty Heart -LRB- 2007 -RRB- and Changeling\
\ -LRB- 2008 -RRB- , which earned her a nomination for an Academy Award for Best\
\ Actress . Beginning in the 2010s , she expanded her career into directing ,\
\ screenwriting , and producing , starting with the wartime dramas In the Land\
\ of Blood and Honey -LRB- 2011 -RRB- and Unbroken -LRB- 2014 -RRB- . Her biggest\
\ commercial success came with the fantasy picture Maleficent -LRB- 2014 -RRB-\
\ . In addition to her film career , Jolie is noted for her humanitarian efforts\
\ , for which she has received a Jean Hersholt Humanitarian Award and an honorary\
\ damehood of the Order of St Michael and St George -LRB- DCMG -RRB- , among other\
\ honors . She promotes various causes , including conservation , education ,\
\ and women 's rights , and is most noted for her advocacy on behalf of refugees\
\ as a Special Envoy for the United Nations High Commissioner for Refugees -LRB-\
\ UNHCR -RRB- . As a public figure , Jolie has been cited as one of the most influential\
\ and powerful people in the American entertainment industry , as well as the\
\ world 's most beautiful woman , by various media outlets . Her personal life\
\ is the subject of wide publicity . Divorced from actors Jonny Lee Miller and\
\ Billy Bob Thornton , she separated from her third husband , actor Brad Pitt\
\ , in September 2016 . They have six children together , three of whom were adopted\
\ internationally . \",\n \"label\": 2\n },\n \"truncated_cells\":\
\ []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \"claim\": \"John\
\ McCain has chaired the House Commerce Committee.\",\n \"text\": \"John\
\ Sidney McCain III -LRB- born August 29 , 1936 -RRB- is an American politician\
\ who currently serves as the senior United States Senator from Arizona . He was\
\ the Republican nominee for the 2008 U.S. presidential election . McCain followed\
\ his father and grandfather , both four-star admirals , into the United States\
\ Navy , graduating from the U.S. Naval Academy in 1958 . He became a naval aviator\
\ , flying ground-attack aircraft from aircraft carriers . During the Vietnam\
\ War , he was almost killed in the 1967 USS Forrestal fire . In October 1967\
\ , while on a bombing mission over Hanoi , he was shot down , seriously injured\
\ , and captured by the North Vietnamese . He was a prisoner of war until 1973\
\ . McCain experienced episodes of torture , and refused an out-of-sequence early\
\ repatriation offer . His war wounds have left him with lifelong physical limitations\
\ . He retired from the Navy as a captain in 1981 and moved to Arizona , where\
\ he entered politics . Elected to the U.S. House of Representatives in 1982 ,\
\ McCain served two terms . He was first elected to the U.S. Senate in 1986 ,\
\ winning re-election easily five times , most recently in 2016 . While generally\
\ adhering to conservative principles , McCain at times has had a media reputation\
\ as a `` maverick '' for his willingness to disagree with his party on certain\
\ issues . After being investigated and largely exonerated in a political influence\
\ scandal of the 1980s as a member of the Keating Five , he made campaign finance\
\ reform one of his signature concerns , eventually gaining passage of the McCain\
\ -- Feingold Act in 2002 . He is also known for his work in the 1990s to restore\
\ diplomatic relations with Vietnam , and for his belief that the Iraq War should\
\ have been fought to a successful conclusion . McCain has chaired the Senate\
\ Commerce Committee , opposed spending that he considered to be pork barrel ,\
\ and played a key role in alleviating a crisis over judicial nominations with\
\ the bi-partisan group known as the Gang of 14 . McCain ran for the Republican\
\ nomination in 2000 but lost a heated primary season contest to George W. Bush\
\ of Texas . He secured the nomination in 2008 after coming back from early reversals\
\ , but was defeated by Democratic candidate Barack Obama in the general election\
\ , losing by a 365 -- 173 electoral college margin and by 53 -- 46 % in the popular\
\ vote . He subsequently adopted more orthodox conservative stances and attitudes\
\ and largely opposed actions of the Obama administration , especially in regard\
\ to foreign policy matters . By 2013 , however , he had become a key figure in\
\ the Senate for negotiating deals on certain issues in an otherwise partisan\
\ environment . In 2015 , McCain became chairman of the Senate Armed Services\
\ Committee . \",\n \"label\": 0\n },\n \"truncated_cells\": []\n \
\ }\n]"
- "DATASET_NAME: \"thepurpleowl/codequeries\"\nFEATURES: {'query_name': {'dtype':\
\ 'string', '_type': 'Value'}, 'code_file_path': {'dtype': 'string', '_type':\
\ 'Value'}, 'context_blocks': [{'content': {'dtype': 'string', '_type': 'Value'},\
\ 'metadata': {'dtype': 'string', '_type': 'Value'}, 'header': {'dtype': 'string',\
\ '_type': 'Value'}, 'index': {'dtype': 'int32', '_type': 'Value'}}], 'answer_spans':\
\ [{'span': {'dtype': 'string', '_type': 'Value'}, 'start_line': {'dtype': 'int32',\
\ '_type': 'Value'}, 'start_column': {'dtype': 'int32', '_type': 'Value'}, 'end_line':\
\ {'dtype': 'int32', '_type': 'Value'}, 'end_column': {'dtype': 'int32', '_type':\
\ 'Value'}}], 'supporting_fact_spans': [{'span': {'dtype': 'string', '_type':\
\ 'Value'}, 'start_line': {'dtype': 'int32', '_type': 'Value'}, 'start_column':\
\ {'dtype': 'int32', '_type': 'Value'}, 'end_line': {'dtype': 'int32', '_type':\
\ 'Value'}, 'end_column': {'dtype': 'int32', '_type': 'Value'}}], 'example_type':\
\ {'dtype': 'int8', '_type': 'Value'}, 'single_hop': {'dtype': 'bool', '_type':\
\ 'Value'}, 'subtokenized_input_sequence': {'feature': {'dtype': 'string', '_type':\
\ 'Value'}, '_type': 'Sequence'}, 'label_sequence': {'feature': {'dtype': 'int8',\
\ '_type': 'Value'}, '_type': 'Sequence'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\"\
: 0,\n \"row\": {\n \"query_name\": \"Unused import\",\n \"code_file_path\"\
: \"rcbops/glance-buildpackage/glance/tests/unit/test_db.py\",\n \"context_blocks\"\
: [\n {\n \"content\": \"# vim: tabstop=4 shiftwidth=4 softtabstop=4\\\
n\\n# Copyright 2010-2011 OpenStack, LLC\\n# All Rights Reserved.\\n#\\n# Licensed\
\ under the Apache License, Version 2.0 (the \\\"License\\\"); you may\\n# \
\ not use this file except in compliance with the License. You may obtain\\n#\
\ a copy of the License at\\n#\\n# http://www.apache.org/licenses/LICENSE-2.0\\\
n#\\n# Unless required by applicable law or agreed to in writing, software\\\
n# distributed under the License is distributed on an \\\"AS IS\\\" BASIS,\
\ WITHOUT\\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\
\ See the\\n# License for the specific language governing permissions and limitations\\\
n# under the License.\\n\\nimport datetime\\n\\nfrom glance.common import context\\\
nfrom glance.common import exception\\nfrom glance.common import utils\\nfrom\
\ glance.registry import context as rcontext\\nfrom glance.registry.db import\
\ api as db_api\\nfrom glance.registry.db import models as db_models\\nfrom glance.tests.unit\
\ import base\\nfrom glance.tests import utils as test_utils\\n\\n\\n_gen_uuid\
\ = utils.generate_uuid\\n\\nUUID1 = _gen_uuid()\\nUUID2 = _gen_uuid()\\n\\n\\\
nCONF = {'sql_connection': 'sqlite://',\\n 'verbose': False,\\n \
\ 'debug': False}\\n\\nFIXTURES = [\\n {'id': UUID1,\\n 'name': 'fake image\
\ #1',\\n 'status': 'active',\\n 'disk_format': 'ami',\\n 'container_format':\
\ 'ami',\\n 'is_public': False,\\n 'created_at': datetime.datetime.utcnow(),\\\
n 'updated_at': datetime.datetime.utcnow(),\\n 'deleted_at': None,\\n\
\ 'deleted': False,\\n 'checksum': None,\\n 'min_disk': 0,\\n \
\ 'min_ram': 0,\\n 'size': 13,\\n 'location': \\\"swift://user:passwd@acct/container/obj.tar.0\\\
\",\\n 'properties': {'type': 'kernel'}},\\n {'id': UUID2,\\n 'name':\
\ 'fake image #2',\\n 'status': 'active',\\n 'disk_format': 'vhd',\\n\
\ 'container_format': 'ovf',\\n 'is_public': True,\\n 'created_at':\
\ datetime.datetime.utcnow(),\\n 'updated_at': datetime.datetime.utcnow(),\\\
n 'deleted_at': None,\\n 'deleted': False,\\n 'checksum': None,\\\
n 'min_disk': 5,\\n 'min_ram': 256,\\n 'size': 19,\\n 'location':\
\ \\\"file:///tmp/glance-tests/2\\\",\\n 'properties': {}}]\\n\\n\\n\",\n\
\ \"metadata\": \"root\",\n \"header\": \"['module', '___EOS___']\"\
,\n \"index\": 0\n },\n {\n \"content\": \"class\
\ TestRegistryDb(base.IsolatedUnitTest):\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\"\
,\n \"metadata\": \"root.TestRegistryDb\",\n \"header\": \"\
['module', '___EOS___']\",\n \"index\": 74\n },\n {\n \
\ \"content\": \" def setUp(self):\\n \\\"\\\"\\\"Establish\
\ a clean test environment\\\"\\\"\\\"\\n super(TestRegistryDb, self).setUp()\\\
n conf = test_utils.TestConfigOpts(CONF)\\n self.adm_context = rcontext.RequestContext(is_admin=True)\\\
n self.context = rcontext.RequestContext(is_admin=False)\\n db_api.configure_db(conf)\\\
n self.destroy_fixtures()\\n self.create_fixtures()\",\n \
\ \"metadata\": \"root.TestRegistryDb.setUp\",\n \"header\": \"['class',\
\ 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':', '___EOS___']\"\
,\n \"index\": 76\n },\n {\n \"content\": \" \
\ def create_fixtures(self):\\n for fixture in FIXTURES:\\n \
\ db_api.image_create(self.adm_context, fixture)\",\n \"metadata\"\
: \"root.TestRegistryDb.create_fixtures\",\n \"header\": \"['class',\
\ 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':', '___EOS___']\"\
,\n \"index\": 86\n },\n {\n \"content\": \" \
\ def destroy_fixtures(self):\\n # Easiest to just drop the models and\
\ re-create them...\\n db_models.unregister_models(db_api._ENGINE)\\n \
\ db_models.register_models(db_api._ENGINE)\",\n \"metadata\":\
\ \"root.TestRegistryDb.destroy_fixtures\",\n \"header\": \"['class',\
\ 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':', '___EOS___']\"\
,\n \"index\": 90\n },\n {\n \"content\": \" \
\ def test_image_get(self):\\n image = db_api.image_get(self.context,\
\ UUID1)\\n self.assertEquals(image['id'], FIXTURES[0]['id'])\",\n \
\ \"metadata\": \"root.TestRegistryDb.test_image_get\",\n \"header\"\
: \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':',\
\ '___EOS___']\",\n \"index\": 95\n },\n {\n \"\
content\": \" def test_image_get_disallow_deleted(self):\\n db_api.image_destroy(self.adm_context,\
\ UUID1)\\n self.assertRaises(exception.NotFound, db_api.image_get,\\n\
\ self.context, UUID1)\",\n \"metadata\": \"\
root.TestRegistryDb.test_image_get_disallow_deleted\",\n \"header\":\
\ \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':',\
\ '___EOS___']\",\n \"index\": 99\n },\n {\n \"\
content\": \" def test_image_get_allow_deleted(self):\\n db_api.image_destroy(self.adm_context,\
\ UUID1)\\n image = db_api.image_get(self.adm_context, UUID1)\\n \
\ self.assertEquals(image['id'], FIXTURES[0]['id'])\",\n \"metadata\"\
: \"root.TestRegistryDb.test_image_get_allow_deleted\",\n \"header\"\
: \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':',\
\ '___EOS___']\",\n \"index\": 104\n },\n {\n \
\ \"content\": \" def test_image_get_force_allow_deleted(self):\\n db_api.image_destroy(self.adm_context,\
\ UUID1)\\n image = db_api.image_get(self.context, UUID1, force_show_deleted=True)\\\
n self.assertEquals(image['id'], FIXTURES[0]['id'])\",\n \"metadata\"\
: \"root.TestRegistryDb.test_image_get_force_allow_deleted\",\n \"header\"\
: \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest', ')', ':',\
\ '___EOS___']\",\n \"index\": 109\n },\n {\n \
\ \"content\": \" def test_image_get_all(self):\\n images = db_api.image_get_all(self.context)\\\
n self.assertEquals(len(images), 2)\",\n \"metadata\": \"root.TestRegistryDb.test_image_get_all\"\
,\n \"header\": \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest',\
\ ')', ':', '___EOS___']\",\n \"index\": 114\n },\n {\n\
\ \"content\": \" def test_image_get_all_marker(self):\\n images\
\ = db_api.image_get_all(self.context, marker=UUID2)\\n self.assertEquals(len(images),\
\ 1)\",\n \"metadata\": \"root.TestRegistryDb.test_image_get_all_marker\"\
,\n \"header\": \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest',\
\ ')', ':', '___EOS___']\",\n \"index\": 118\n },\n {\n\
\ \"content\": \" def test_image_get_all_marker_deleted(self):\\n\
\ \\\"\\\"\\\"Cannot specify a deleted image as a marker.\\\"\\\"\\\"\\\
n db_api.image_destroy(self.adm_context, UUID1)\\n filters = {'deleted':\
\ False}\\n self.assertRaises(exception.NotFound, db_api.image_get_all,\\\
n self.context, marker=UUID1, filters=filters)\",\n \
\ \"metadata\": \"root.TestRegistryDb.test_image_get_all_marker_deleted\"\
,\n \"header\": \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest',\
\ ')', ':', '___EOS___']\",\n \"index\": 122\n },\n {\n\
\ \"content\": \" def test_image_get_all_marker_deleted_showing_deleted_as_admin(self):\\\
n \\\"\\\"\\\"Specify a deleted image as a marker if showing deleted images.\\\
\"\\\"\\\"\\n db_api.image_destroy(self.adm_context, UUID1)\\n images\
\ = db_api.image_get_all(self.adm_context, marker=UUID1)\\n self.assertEquals(len(images),\
\ 0)\",\n \"metadata\": \"root.TestRegistryDb.test_image_get_all_marker_deleted_showing_deleted_as_admin\"\
,\n \"header\": \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest',\
\ ')', ':', '___EOS___']\",\n \"index\": 129\n },\n {\n\
\ \"content\": \" def test_image_get_all_marker_deleted_showing_deleted(self):\\\
n \\\"\\\"\\\"Specify a deleted image as a marker if showing deleted images.\\\
\"\\\"\\\"\\n db_api.image_destroy(self.adm_context, UUID1)\\n filters\
\ = {'deleted': True}\\n images = db_api.image_get_all(self.context, marker=UUID1,\\\
n filters=filters)\\n self.assertEquals(len(images),\
\ 0)\",\n \"metadata\": \"root.TestRegistryDb.test_image_get_all_marker_deleted_showing_deleted\"\
,\n \"header\": \"['class', 'TestRegistryDb', '(', 'base', '.', 'IsolatedUnitTest',\
\ ')', ':', '___EOS___']\",\n \"index\": 135\n }\n ],\n \
\ \"answer_spans\": [\n {\n \"span\": \"from glance.common\
\ import context\",\n \"start_line\": 19,\n \"start_column\"\
: 0,\n \"end_line\": 19,\n \"end_column\": 33\n }\n \
\ ],\n \"supporting_fact_spans\": [],\n \"example_type\": 1,\n \
\ \"single_hop\": false,\n \"subtokenized_input_sequence\": [\n \
\ \"[CLS]_\",\n \"Un\",\n \"used_\",\n \"import_\",\n \
\ \"[SEP]_\",\n \"module_\",\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\
\\u\\\\u_\",\n \"#\",\n \" \",\n \"vim\",\n \":\"\
,\n \" \",\n \"tabs\",\n \"top\",\n \"=\",\n \
\ \"4\",\n \" \",\n \"shift\",\n \"widt\",\n \"\
h\",\n \"=\",\n \"4\",\n \" \",\n \"soft\",\n \
\ \"tabs\",\n \"top\",\n \"=\",\n \"4_\",\n \"\\\
\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\\
u_\",\n \"#\",\n \" \",\n \"Copy\",\n \"right\",\n\
\ \" \",\n \"2010\",\n \"-\",\n \"2011\",\n \
\ \" \",\n \"Open\",\n \"Stack\",\n \",\",\n \" \"\
,\n \"LLC\",\n \"_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\\
u_\",\n \"#\",\n \" \",\n \"All\",\n \" \",\n \
\ \"Rig\",\n \"hts\",\n \" \",\n \"Reserve\",\n \
\ \"d\",\n \"._\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"#\",\n \"_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"#\",\n \" \",\n \" \",\n \" \",\n \"\
\ \",\n \"License\",\n \"d\",\n \" \",\n \"under\"\
,\n \" \",\n \"the\",\n \" \",\n \"Ap\",\n \
\ \"ache\",\n \" \",\n \"License\",\n \",\",\n \"\
\ \",\n \"Version\",\n \" \",\n \"2.0\",\n \" \",\n\
\ \"(\",\n \"the\",\n \" \",\n \"\\\"\",\n \
\ \"License\",\n \"\\\");\",\n \" \",\n \"you\",\n \
\ \" \",\n \"may\",\n \"_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\
\\u\\\\u_\",\n \"#\",\n \" \",\n \" \",\n \" \",\n\
\ \" \",\n \"not\",\n \" \",\n \"use\",\n \"\
\ \",\n \"this\",\n \" \",\n \"file\",\n \" \",\n\
\ \"except\",\n \" \",\n \"in\",\n \" \",\n \
\ \"compli\",\n \"anc\",\n \"e\",\n \" \",\n \"with\"\
,\n \" \",\n \"the\",\n \" \",\n \"License\",\n \
\ \".\",\n \" \",\n \"You\",\n \" \",\n \"may\"\
,\n \" \",\n \"obtain\",\n \"_\",\n \"\\\\u\\\\u\\\
\\uNL\\\\u\\\\u\\\\u_\",\n \"#\",\n \" \",\n \" \",\n \
\ \" \",\n \" \",\n \"a\",\n \" \",\n \"copy\"\
,\n \" \",\n \"of\",\n \" \",\n \"the\",\n \
\ \" \",\n \"License\",\n \" \",\n \"at_\",\n \"\\\
\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"#\",\n \"_\",\n \"\
\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"#\",\n \" \",\n \
\ \" \",\n \" \",\n \" \",\n \" \",\n \"http\"\
,\n \"://\",\n \"www\",\n \".\",\n \"apa\",\n \
\ \"che\",\n \".\",\n \"org\",\n \"/\",\n \"license\"\
,\n \"s\",\n \"/\",\n \"LICENSE\",\n \"-\",\n \
\ \"2.0_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"#\"\
,\n \"_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\
#\",\n \" \",\n \" \",\n \" \",\n \" \",\n \
\ \"Un\",\n \"less\",\n \" \",\n \"require\",\n \"\
d\",\n \" \",\n \"by\",\n \" \",\n \"applica\",\n\
\ \"ble\",\n \" \",\n \"law\",\n \" \",\n \"\
or\",\n \" \",\n \"agree\",\n \"d\",\n \" \",\n \
\ \"to\",\n \" \",\n \"in\",\n \" \",\n \"writ\"\
,\n \"ing\",\n \",\",\n \" \",\n \"software\",\n \
\ \"_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"#\"\
,\n \" \",\n \" \",\n \" \",\n \" \",\n \"\
distributed\",\n \" \",\n \"under\",\n \" \",\n \"\
the\",\n \" \",\n \"License\",\n \" \",\n \"is\",\n\
\ \" \",\n \"distributed\",\n \" \",\n \"on\",\n \
\ \" \",\n \"an\",\n \" \",\n \"\\\"\",\n \"\
AS\",\n \" \",\n \"IS\",\n \"\\\"\",\n \" \",\n \
\ \"BAS\",\n \"IS\",\n \",\",\n \" \",\n \"WITH\"\
,\n \"OUT_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"#\",\n \" \",\n \" \",\n \" \",\n \" \",\n \
\ \"WAR\",\n \"RAN\",\n \"TIES\",\n \" \",\n \"\
OR\",\n \" \",\n \"CONDITION\",\n \"S\",\n \" \",\n\
\ \"OF\",\n \" \",\n \"ANY\",\n \" \",\n \"\
KIND\",\n \",\",\n \" \",\n \"eit\",\n \"her\",\n\
\ \" \",\n \"express\",\n \" \",\n \"or\",\n \
\ \" \",\n \"impli\",\n \"ed\",\n \".\",\n \" \"\
,\n \"See\",\n \" \",\n \"the_\",\n \"\\\\u\\\\u\\\
\\uNL\\\\u\\\\u\\\\u_\",\n \"#\",\n \" \",\n \" \",\n \
\ \" \",\n \" \",\n \"License\",\n \" \",\n \"\
for\",\n \" \",\n \"the\",\n \" \",\n \"specific\"\
,\n \" \",\n \"language\",\n \" \",\n \"govern\",\n\
\ \"ing\",\n \" \",\n \"permissi\",\n \"ons\",\n \
\ \" \",\n \"and\",\n \" \",\n \"limit\",\n \
\ \"ations_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"#\"\
,\n \" \",\n \" \",\n \" \",\n \" \",\n \"\
under\",\n \" \",\n \"the\",\n \" \",\n \"License\"\
,\n \"._\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\
\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"import_\",\n \"datetime_\"\
,\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\
\\uNL\\\\u\\\\u\\\\u_\",\n \"from_\",\n \"glance_\",\n \"\
._\",\n \"common_\",\n \"import_\",\n \"context_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"from_\",\n \
\ \"glance_\",\n \"._\",\n \"common_\",\n \"import_\",\n\
\ \"exception_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n\
\ \"from_\",\n \"glance_\",\n \"._\",\n \"common_\"\
,\n \"import_\",\n \"utils_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"from_\",\n \"glance_\",\n \"._\",\n\
\ \"registry_\",\n \"import_\",\n \"context_\",\n \
\ \"as_\",\n \"rcon\",\n \"text_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"from_\",\n \"glance_\",\n \"._\",\n\
\ \"registry_\",\n \"._\",\n \"db_\",\n \"import_\"\
,\n \"api_\",\n \"as_\",\n \"db\",\n \"\\\\u\",\n\
\ \"api_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"from_\",\n \"glance_\",\n \"._\",\n \"registry_\"\
,\n \"._\",\n \"db_\",\n \"import_\",\n \"models_\"\
,\n \"as_\",\n \"db\",\n \"\\\\u\",\n \"models_\"\
,\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"from_\",\n \
\ \"glance_\",\n \"._\",\n \"tests_\",\n \"._\",\n\
\ \"unit_\",\n \"import_\",\n \"base_\",\n \"\\\\\
u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"from_\",\n \"glance_\"\
,\n \"._\",\n \"tests_\",\n \"import_\",\n \"utils_\"\
,\n \"as_\",\n \"test\\\\u\",\n \"utils_\",\n \"\\\
\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\\
u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\"\
,\n \"gen\",\n \"\\\\u\",\n \"uuid_\",\n \"=_\",\n\
\ \"utils_\",\n \"._\",\n \"generat\",\n \"e\\\\u\"\
,\n \"uuid_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"UU\",\n \"ID\"\
,\n \"1_\",\n \"=_\",\n \"\\\\u\",\n \"gen\",\n \
\ \"\\\\u\",\n \"uuid_\",\n \"(_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"UU\",\n \"\
ID\",\n \"2_\",\n \"=_\",\n \"\\\\u\",\n \"gen\",\n\
\ \"\\\\u\",\n \"uuid_\",\n \"(_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\
CONF_\",\n \"=_\",\n \"{_\",\n \"'\",\n \"sql\",\n\
\ \"\\\\u\",\n \"connecti\",\n \"on\",\n \"'_\",\n\
\ \":_\",\n \"'\",\n \"sql\",\n \"ite\",\n \
\ \"://'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n\
\ \"'\",\n \"verbo\",\n \"se\",\n \"'_\",\n \
\ \":_\",\n \"False_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\
\\u\\\\u\\\\u_\",\n \"'\",\n \"debug\",\n \"'_\",\n \
\ \":_\",\n \"False_\",\n \"}_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\
FIXTURE\",\n \"S_\",\n \"=_\",\n \"[_\",\n \"\\\\\
u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"{_\",\n \"'\",\n \"\
id\",\n \"'_\",\n \":_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"name\",\n \"'_\",\n \":_\",\n \
\ \"'\",\n \"fake\",\n \" \",\n \"image\",\n \"\
\ \",\n \"#\",\n \"1\",\n \"'_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"status\"\
,\n \"'_\",\n \":_\",\n \"'\",\n \"active\",\n \
\ \"'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"disk\",\n \"\\\\u\",\n \"format\",\n\
\ \"'_\",\n \":_\",\n \"'\",\n \"ami\",\n \"\
'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"'\",\n \"container\",\n \"\\\\u\",\n \"format\",\n\
\ \"'_\",\n \":_\",\n \"'\",\n \"ami\",\n \"\
'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"'\",\n \"is\",\n \"\\\\u\",\n \"public\",\n \
\ \"'_\",\n \":_\",\n \"False_\",\n \",_\",\n \"\\\
\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"created\",\n \
\ \"\\\\u\",\n \"at\",\n \"'_\",\n \":_\",\n \"\
datetime_\",\n \"._\",\n \"datetime_\",\n \"._\",\n \
\ \"utcnow_\",\n \"(_\",\n \")_\",\n \",_\",\n \"\
\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"update\",\n \
\ \"d\\\\u\",\n \"at\",\n \"'_\",\n \":_\",\n \
\ \"datetime_\",\n \"._\",\n \"datetime_\",\n \"._\",\n\
\ \"utcnow_\",\n \"(_\",\n \")_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"delete\"\
,\n \"d\\\\u\",\n \"at\",\n \"'_\",\n \":_\",\n \
\ \"None_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\\
u_\",\n \"'\",\n \"delete\",\n \"d\",\n \"'_\",\n\
\ \":_\",\n \"False_\",\n \",_\",\n \"\\\\u\\\\u\\\
\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"checks\",\n \"um\"\
,\n \"'_\",\n \":_\",\n \"None_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"min\"\
,\n \"\\\\u\",\n \"disk\",\n \"'_\",\n \":_\",\n \
\ \"0_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"min\",\n \"\\\\u\",\n \"ram\",\n \
\ \"'_\",\n \":_\",\n \"0_\",\n \",_\",\n \"\\\
\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"size\",\n \
\ \"'_\",\n \":_\",\n \"13_\",\n \",_\",\n \"\\\\\
u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"location\",\n \
\ \"'_\",\n \":_\",\n \"\\\"\",\n \"swift\",\n \
\ \"://\",\n \"user\",\n \":\",\n \"passw\",\n \"\
d\",\n \"@\",\n \"acct\",\n \"/\",\n \"container\"\
,\n \"/\",\n \"obj\",\n \".\",\n \"tar\",\n \
\ \".0\",\n \"\\\"_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\
\\u\\\\u\\\\u_\",\n \"'\",\n \"proper\",\n \"ties\",\n \
\ \"'_\",\n \":_\",\n \"{_\",\n \"'\",\n \"type\"\
,\n \"'_\",\n \":_\",\n \"'\",\n \"kernel\",\n \
\ \"'_\",\n \"}_\",\n \"}_\",\n \",_\",\n \"\\\
\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"{_\",\n \"'\",\n \"\
id\",\n \"'_\",\n \":_\",\n \"UU\",\n \"ID\",\n \
\ \"2_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"name\",\n \"'_\",\n \":_\",\n \
\ \"'\",\n \"fake\",\n \" \",\n \"image\",\n \"\
\ \",\n \"#\",\n \"2\",\n \"'_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"status\"\
,\n \"'_\",\n \":_\",\n \"'\",\n \"active\",\n \
\ \"'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"disk\",\n \"\\\\u\",\n \"format\",\n\
\ \"'_\",\n \":_\",\n \"'\",\n \"vhd\",\n \"\
'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"'\",\n \"container\",\n \"\\\\u\",\n \"format\",\n\
\ \"'_\",\n \":_\",\n \"'\",\n \"ov\",\n \"\
f\",\n \"'_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\
\\u_\",\n \"'\",\n \"is\",\n \"\\\\u\",\n \"public\"\
,\n \"'_\",\n \":_\",\n \"True_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"created\"\
,\n \"\\\\u\",\n \"at\",\n \"'_\",\n \":_\",\n \
\ \"datetime_\",\n \"._\",\n \"datetime_\",\n \"._\"\
,\n \"utcnow_\",\n \"(_\",\n \")_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"update\"\
,\n \"d\\\\u\",\n \"at\",\n \"'_\",\n \":_\",\n \
\ \"datetime_\",\n \"._\",\n \"datetime_\",\n \"._\"\
,\n \"utcnow_\",\n \"(_\",\n \")_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"delete\"\
,\n \"d\\\\u\",\n \"at\",\n \"'_\",\n \":_\",\n \
\ \"None_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\\
u_\",\n \"'\",\n \"delete\",\n \"d\",\n \"'_\",\n\
\ \":_\",\n \"False_\",\n \",_\",\n \"\\\\u\\\\u\\\
\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"checks\",\n \"um\"\
,\n \"'_\",\n \":_\",\n \"None_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"min\"\
,\n \"\\\\u\",\n \"disk\",\n \"'_\",\n \":_\",\n \
\ \"5_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"min\",\n \"\\\\u\",\n \"ram\",\n \
\ \"'_\",\n \":_\",\n \"256_\",\n \",_\",\n \"\
\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"size\",\n \
\ \"'_\",\n \":_\",\n \"19_\",\n \",_\",\n \"\\\
\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"'\",\n \"location\",\n \
\ \"'_\",\n \":_\",\n \"\\\"\",\n \"file\",\n \
\ \":///\",\n \"tmp\",\n \"/\",\n \"gla\",\n \"nce\"\
,\n \"-\",\n \"tests\",\n \"/\",\n \"2\",\n \
\ \"\\\"_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"'\",\n \"proper\",\n \"ties\",\n \"'_\",\n \
\ \":_\",\n \"{_\",\n \"}_\",\n \"}_\",\n \"\
]_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\\
u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n\
\ \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\\
uDEDENT\\\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n \"module_\",\n \
\ \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\
\\u\\\\u_\",\n \"class_\",\n \"Test\",\n \"Regi\",\n \
\ \"stry\",\n \"Db_\",\n \"(_\",\n \"base_\",\n \
\ \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\",\n \"\
Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\
\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\\
u\\\\u_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\\
u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\
\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n\
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\\
uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n\
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\
\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\\
u\\\\u_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\\
u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\
\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n\
\ \"[SEP]_\",\n \"class_\",\n \"Test\",\n \"Regi\"\
,\n \"stry\",\n \"Db_\",\n \"(_\",\n \"base_\",\n\
\ \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\",\n \
\ \"Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\\
uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" _\",\n \"\
def_\",\n \"set\",\n \"Up_\",\n \"(_\",\n \"self_\"\
,\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\\
u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" \
\ _\",\n \"\\\"\\\"\\\"\",\n \"Esta\",\n \"blis\",\n \
\ \"h\",\n \" \",\n \"a\",\n \" \",\n \"clean\"\
,\n \" \",\n \"test\",\n \" \",\n \"environ\",\n \
\ \"ment\",\n \"\\\"\\\"\\\"_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"super_\",\n \"(_\",\n \"Test\",\n \
\ \"Regi\",\n \"stry\",\n \"Db_\",\n \",_\",\n \
\ \"self_\",\n \")_\",\n \"._\",\n \"set\",\n \"\
Up_\",\n \"(_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\\
u\\\\u\\\\u_\",\n \"conf_\",\n \"=_\",\n \"test\\\\u\",\n\
\ \"utils_\",\n \"._\",\n \"Test\",\n \"Config\",\n\
\ \"Opts_\",\n \"(_\",\n \"CONF_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \
\ \"._\",\n \"adm\",\n \"\\\\u\",\n \"context_\",\n \
\ \"=_\",\n \"rcon\",\n \"text_\",\n \"._\",\n \
\ \"Request\",\n \"Context_\",\n \"(_\",\n \"is\",\n \
\ \"\\\\u\",\n \"admin_\",\n \"=_\",\n \"True_\",\n \
\ \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"self_\",\n \"._\",\n \"context_\",\n \"=_\",\n \
\ \"rcon\",\n \"text_\",\n \"._\",\n \"Request\",\n \
\ \"Context_\",\n \"(_\",\n \"is\",\n \"\\\\u\",\n \
\ \"admin_\",\n \"=_\",\n \"False_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"db\",\n \"\
\\\\u\",\n \"api_\",\n \"._\",\n \"configur\",\n \"\
e\\\\u\",\n \"db_\",\n \"(_\",\n \"conf_\",\n \")_\"\
,\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \
\ \"._\",\n \"destroy\",\n \"\\\\u\",\n \"fixtures_\"\
,\n \"(_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\\
u\\\\u_\",\n \"self_\",\n \"._\",\n \"create\",\n \
\ \"\\\\u\",\n \"fixtures_\",\n \"(_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n \
\ \"class_\",\n \"Test\",\n \"Regi\",\n \"stry\",\n \
\ \"Db_\",\n \"(_\",\n \"base_\",\n \"._\",\n \"\
Isolat\",\n \"ed\",\n \"Unit\",\n \"Test_\",\n \"\
)_\",\n \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\
\\u\\\\u\\\\u_\",\n \"def_\",\n \"create\",\n \"\\\\u\",\n\
\ \"fixtures_\",\n \"(_\",\n \"self_\",\n \")_\",\n\
\ \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" _\",\n \
\ \"for_\",\n \"fixture_\",\n \"in_\",\n \"FIXTURE\",\n \
\ \"S_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" \
\ _\",\n \"db\",\n \"\\\\u\",\n \"api_\",\n \"\
._\",\n \"image\",\n \"\\\\u\",\n \"create_\",\n \"\
(_\",\n \"self_\",\n \"._\",\n \"adm\",\n \"\\\\u\"\
,\n \"context_\",\n \",_\",\n \"fixture_\",\n \")_\"\
,\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n\
\ \"class_\",\n \"Test\",\n \"Regi\",\n \"stry\",\n\
\ \"Db_\",\n \"(_\",\n \"base_\",\n \"._\",\n \
\ \"Isolat\",\n \"ed\",\n \"Unit\",\n \"Test_\",\n \
\ \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\"\
,\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \
\ \"def_\",\n \"destroy\",\n \"\\\\u\",\n \"fixtures_\"\
,\n \"(_\",\n \"self_\",\n \")_\",\n \":_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"#\",\n \"\
\ \",\n \"Eas\",\n \"ies\",\n \"t\",\n \" \",\n \
\ \"to\",\n \" \",\n \"just\",\n \" \",\n \"\
drop\",\n \" \",\n \"the\",\n \" \",\n \"model\",\n\
\ \"s\",\n \" \",\n \"and\",\n \" \",\n \"\
re\",\n \"-\",\n \"create\",\n \" \",\n \"them\",\n\
\ \"..._\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\
\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" _\",\n \"db\"\
,\n \"\\\\u\",\n \"models_\",\n \"._\",\n \"unregister\"\
,\n \"\\\\u\",\n \"models_\",\n \"(_\",\n \"db\",\n\
\ \"\\\\u\",\n \"api_\",\n \"._\",\n \"\\\\u\",\n\
\ \"ENGINE_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\
\\u\\\\u_\",\n \"db\",\n \"\\\\u\",\n \"models_\",\n \
\ \"._\",\n \"register\",\n \"\\\\u\",\n \"models_\",\n\
\ \"(_\",\n \"db\",\n \"\\\\u\",\n \"api_\",\n \
\ \"._\",\n \"\\\\u\",\n \"ENGINE_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n \
\ \"class_\",\n \"Test\",\n \"Regi\",\n \"stry\",\n \
\ \"Db_\",\n \"(_\",\n \"base_\",\n \"._\",\n \
\ \"Isolat\",\n \"ed\",\n \"Unit\",\n \"Test_\",\n \
\ \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n\
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\
\\u\\\\u\\\\u_\",\n \"def_\",\n \"test\\\\u\",\n \"image\"\
,\n \"\\\\u\",\n \"get_\",\n \"(_\",\n \"self_\",\n\
\ \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" \
\ _\",\n \"image_\",\n \"=_\",\n \"db\",\n \"\\\\\
u\",\n \"api_\",\n \"._\",\n \"image\",\n \"\\\\u\"\
,\n \"get_\",\n \"(_\",\n \"self_\",\n \"._\",\n \
\ \"context_\",\n \",_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\\
u_\",\n \"self_\",\n \"._\",\n \"assert\",\n \"Equals_\"\
,\n \"(_\",\n \"image_\",\n \"[_\",\n \"'\",\n \
\ \"id\",\n \"'_\",\n \"]_\",\n \",_\",\n \"FIXTURE\"\
,\n \"S_\",\n \"[_\",\n \"0_\",\n \"]_\",\n \
\ \"[_\",\n \"'\",\n \"id\",\n \"'_\",\n \"]_\",\n\
\ \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"[SEP]_\",\n \"class_\",\n \"Test\",\n \"Regi\",\n \
\ \"stry\",\n \"Db_\",\n \"(_\",\n \"base_\",\n \
\ \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\",\n \
\ \"Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\
\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"def_\",\n \"test\\\\\
u\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \"\\\\\
u\",\n \"disallow\",\n \"\\\\u\",\n \"deleted_\",\n \
\ \"(_\",\n \"self_\",\n \")_\",\n \":_\",\n \"\\\
\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\
\\u\\\\u \",\n \" _\",\n \"db\",\n \"\\\\u\",\n \
\ \"api_\",\n \"._\",\n \"image\",\n \"\\\\u\",\n \
\ \"destroy_\",\n \"(_\",\n \"self_\",\n \"._\",\n \
\ \"adm\",\n \"\\\\u\",\n \"context_\",\n \",_\",\n\
\ \"UU\",\n \"ID\",\n \"1_\",\n \")_\",\n \"\
\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \"._\"\
,\n \"assert\",\n \"Raises_\",\n \"(_\",\n \"exception_\"\
,\n \"._\",\n \"Not\",\n \"Found_\",\n \",_\",\n \
\ \"db\",\n \"\\\\u\",\n \"api_\",\n \"._\",\n \
\ \"image\",\n \"\\\\u\",\n \"get_\",\n \",_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"self_\",\n \"._\"\
,\n \"context_\",\n \",_\",\n \"UU\",\n \"ID\",\n\
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"[SEP]_\",\n \"class_\",\n \"Test\",\n \
\ \"Regi\",\n \"stry\",\n \"Db_\",\n \"(_\",\n \"\
base_\",\n \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\"\
,\n \"Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\
\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"def_\",\n \
\ \"test\\\\u\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \
\ \"\\\\u\",\n \"allow\",\n \"\\\\u\",\n \"deleted_\"\
,\n \"(_\",\n \"self_\",\n \")_\",\n \":_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\
\\u\\\\u\\\\u \",\n \" _\",\n \"db\",\n \"\\\\u\",\n\
\ \"api_\",\n \"._\",\n \"image\",\n \"\\\\u\",\n\
\ \"destroy_\",\n \"(_\",\n \"self_\",\n \"._\",\n\
\ \"adm\",\n \"\\\\u\",\n \"context_\",\n \",_\",\n\
\ \"UU\",\n \"ID\",\n \"1_\",\n \")_\",\n \"\
\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"image_\",\n \"=_\"\
,\n \"db\",\n \"\\\\u\",\n \"api_\",\n \"._\",\n \
\ \"image\",\n \"\\\\u\",\n \"get_\",\n \"(_\",\n \
\ \"self_\",\n \"._\",\n \"adm\",\n \"\\\\u\",\n \
\ \"context_\",\n \",_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\\
u_\",\n \"self_\",\n \"._\",\n \"assert\",\n \"Equals_\"\
,\n \"(_\",\n \"image_\",\n \"[_\",\n \"'\",\n \
\ \"id\",\n \"'_\",\n \"]_\",\n \",_\",\n \"FIXTURE\"\
,\n \"S_\",\n \"[_\",\n \"0_\",\n \"]_\",\n \
\ \"[_\",\n \"'\",\n \"id\",\n \"'_\",\n \"]_\",\n\
\ \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"[SEP]_\",\n \"class_\",\n \"Test\",\n \"Regi\",\n \
\ \"stry\",\n \"Db_\",\n \"(_\",\n \"base_\",\n \
\ \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\",\n \
\ \"Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\
\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"def_\",\n \"test\\\\\
u\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \"\\\\\
u\",\n \"force\",\n \"\\\\u\",\n \"allow\",\n \"\\\
\\u\",\n \"deleted_\",\n \"(_\",\n \"self_\",\n \"\
)_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n\
\ \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" _\",\n \
\ \"db\",\n \"\\\\u\",\n \"api_\",\n \"._\",\n \
\ \"image\",\n \"\\\\u\",\n \"destroy_\",\n \"(_\",\n\
\ \"self_\",\n \"._\",\n \"adm\",\n \"\\\\u\",\n \
\ \"context_\",\n \",_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\\
u_\",\n \"image_\",\n \"=_\",\n \"db\",\n \"\\\\u\"\
,\n \"api_\",\n \"._\",\n \"image\",\n \"\\\\u\",\n\
\ \"get_\",\n \"(_\",\n \"self_\",\n \"._\",\n \
\ \"context_\",\n \",_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \",_\",\n \"force\",\n \"\\\\u\",\n \
\ \"show\",\n \"\\\\u\",\n \"deleted_\",\n \"=_\",\n \
\ \"True_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"self_\",\n \"._\",\n \"assert\",\n \"\
Equals_\",\n \"(_\",\n \"image_\",\n \"[_\",\n \"\
'\",\n \"id\",\n \"'_\",\n \"]_\",\n \",_\",\n \
\ \"FIXTURE\",\n \"S_\",\n \"[_\",\n \"0_\",\n \
\ \"]_\",\n \"[_\",\n \"'\",\n \"id\",\n \"'_\",\n\
\ \"]_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"[SEP]_\",\n \"class_\",\n \"Test\",\n \
\ \"Regi\",\n \"stry\",\n \"Db_\",\n \"(_\",\n \"\
base_\",\n \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\"\
,\n \"Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\
\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"def_\",\n \
\ \"test\\\\u\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \
\ \"\\\\u\",\n \"all_\",\n \"(_\",\n \"self_\",\n \
\ \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" \
\ _\",\n \"images_\",\n \"=_\",\n \"db\",\n \"\\\\\
u\",\n \"api_\",\n \"._\",\n \"image\",\n \"\\\\u\"\
,\n \"get\",\n \"\\\\u\",\n \"all_\",\n \"(_\",\n\
\ \"self_\",\n \"._\",\n \"context_\",\n \")_\",\n\
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \
\ \"._\",\n \"assert\",\n \"Equals_\",\n \"(_\",\n \
\ \"len_\",\n \"(_\",\n \"images_\",\n \")_\",\n \
\ \",_\",\n \"2_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n \"class_\",\n \"Test\"\
,\n \"Regi\",\n \"stry\",\n \"Db_\",\n \"(_\",\n \
\ \"base_\",\n \"._\",\n \"Isolat\",\n \"ed\",\n \
\ \"Unit\",\n \"Test_\",\n \")_\",\n \":_\",\n \
\ \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\
\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"def_\"\
,\n \"test\\\\u\",\n \"image\",\n \"\\\\u\",\n \"\
get\",\n \"\\\\u\",\n \"all\",\n \"\\\\u\",\n \"marker_\"\
,\n \"(_\",\n \"self_\",\n \")_\",\n \":_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\
\\u\\\\u\\\\u \",\n \" _\",\n \"images_\",\n \"=_\"\
,\n \"db\",\n \"\\\\u\",\n \"api_\",\n \"._\",\n \
\ \"image\",\n \"\\\\u\",\n \"get\",\n \"\\\\u\",\n\
\ \"all_\",\n \"(_\",\n \"self_\",\n \"._\",\n \
\ \"context_\",\n \",_\",\n \"marker_\",\n \"=_\",\n\
\ \"UU\",\n \"ID\",\n \"2_\",\n \")_\",\n \"\
\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \"._\"\
,\n \"assert\",\n \"Equals_\",\n \"(_\",\n \"len_\"\
,\n \"(_\",\n \"images_\",\n \")_\",\n \",_\",\n \
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"[SEP]_\",\n \"class_\",\n \"Test\",\n \
\ \"Regi\",\n \"stry\",\n \"Db_\",\n \"(_\",\n \"\
base_\",\n \"._\",\n \"Isolat\",\n \"ed\",\n \"Unit\"\
,\n \"Test_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\
\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \"def_\",\n \
\ \"test\\\\u\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \
\ \"\\\\u\",\n \"all\",\n \"\\\\u\",\n \"marker\",\n\
\ \"\\\\u\",\n \"deleted_\",\n \"(_\",\n \"self_\"\
,\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\\
u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" \
\ _\",\n \"\\\"\\\"\\\"\",\n \"Cann\",\n \"ot\",\n \
\ \" \",\n \"speci\",\n \"fy\",\n \" \",\n \"\
a\",\n \" \",\n \"delete\",\n \"d\",\n \" \",\n \
\ \"image\",\n \" \",\n \"as\",\n \" \",\n \"\
a\",\n \" \",\n \"marker\",\n \".\\\"\\\"\\\"_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"db\",\n \"\\\
\\u\",\n \"api_\",\n \"._\",\n \"image\",\n \"\\\\\
u\",\n \"destroy_\",\n \"(_\",\n \"self_\",\n \"._\"\
,\n \"adm\",\n \"\\\\u\",\n \"context_\",\n \",_\"\
,\n \"UU\",\n \"ID\",\n \"1_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"filters_\",\n \
\ \"=_\",\n \"{_\",\n \"'\",\n \"delete\",\n \"d\"\
,\n \"'_\",\n \":_\",\n \"False_\",\n \"}_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \
\ \"._\",\n \"assert\",\n \"Raises_\",\n \"(_\",\n \
\ \"exception_\",\n \"._\",\n \"Not\",\n \"Found_\",\n\
\ \",_\",\n \"db\",\n \"\\\\u\",\n \"api_\",\n \
\ \"._\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \
\ \"\\\\u\",\n \"all_\",\n \",_\",\n \"\\\\u\\\\u\\\\\
uNL\\\\u\\\\u\\\\u_\",\n \"self_\",\n \"._\",\n \"context_\"\
,\n \",_\",\n \"marker_\",\n \"=_\",\n \"UU\",\n \
\ \"ID\",\n \"1_\",\n \",_\",\n \"filters_\",\n \
\ \"=_\",\n \"filters_\",\n \")_\",\n \"\\\\u\\\\u\\\\\
uNEWLINE\\\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n \"class_\",\n \
\ \"Test\",\n \"Regi\",\n \"stry\",\n \"Db_\",\n \
\ \"(_\",\n \"base_\",\n \"._\",\n \"Isolat\",\n \"\
ed\",\n \"Unit\",\n \"Test_\",\n \")_\",\n \":_\"\
,\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \
\ \"def_\",\n \"test\\\\u\",\n \"image\",\n \"\\\\u\",\n\
\ \"get\",\n \"\\\\u\",\n \"all\",\n \"\\\\u\",\n\
\ \"marker\",\n \"\\\\u\",\n \"delete\",\n \"d\\\\\
u\",\n \"showin\",\n \"g\",\n \"\\\\u\",\n \"delete\"\
,\n \"d\\\\u\",\n \"as\",\n \"\\\\u\",\n \"admin_\"\
,\n \"(_\",\n \"self_\",\n \")_\",\n \":_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\
\\u\\\\u\\\\u \",\n \" _\",\n \"\\\"\\\"\\\"\",\n \"\
Speci\",\n \"fy\",\n \" \",\n \"a\",\n \" \",\n \
\ \"delete\",\n \"d\",\n \" \",\n \"image\",\n \
\ \" \",\n \"as\",\n \" \",\n \"a\",\n \" \",\n\
\ \"marker\",\n \" \",\n \"if\",\n \" \",\n \
\ \"showin\",\n \"g\",\n \" \",\n \"delete\",\n \"\
d\",\n \" \",\n \"images\",\n \".\\\"\\\"\\\"_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"db\",\n \"\\\
\\u\",\n \"api_\",\n \"._\",\n \"image\",\n \"\\\\\
u\",\n \"destroy_\",\n \"(_\",\n \"self_\",\n \"._\"\
,\n \"adm\",\n \"\\\\u\",\n \"context_\",\n \",_\"\
,\n \"UU\",\n \"ID\",\n \"1_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"images_\",\n \
\ \"=_\",\n \"db\",\n \"\\\\u\",\n \"api_\",\n \"\
._\",\n \"image\",\n \"\\\\u\",\n \"get\",\n \"\\\\\
u\",\n \"all_\",\n \"(_\",\n \"self_\",\n \"._\",\n\
\ \"adm\",\n \"\\\\u\",\n \"context_\",\n \",_\",\n\
\ \"marker_\",\n \"=_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\\
u_\",\n \"self_\",\n \"._\",\n \"assert\",\n \"Equals_\"\
,\n \"(_\",\n \"len_\",\n \"(_\",\n \"images_\",\n\
\ \")_\",\n \",_\",\n \"0_\",\n \")_\",\n \"\
\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"[SEP]_\",\n \"class_\"\
,\n \"Test\",\n \"Regi\",\n \"stry\",\n \"Db_\",\n\
\ \"(_\",\n \"base_\",\n \"._\",\n \"Isolat\",\n \
\ \"ed\",\n \"Unit\",\n \"Test_\",\n \")_\",\n \
\ \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n \"\\\\\
u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\\
u_\",\n \"def_\",\n \"test\\\\u\",\n \"image\",\n \
\ \"\\\\u\",\n \"get\",\n \"\\\\u\",\n \"all\",\n \
\ \"\\\\u\",\n \"marker\",\n \"\\\\u\",\n \"delete\",\n \
\ \"d\\\\u\",\n \"showin\",\n \"g\",\n \"\\\\u\",\n\
\ \"deleted_\",\n \"(_\",\n \"self_\",\n \")_\",\n\
\ \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \" _\",\n \
\ \"\\\"\\\"\\\"\",\n \"Speci\",\n \"fy\",\n \" \",\n \
\ \"a\",\n \" \",\n \"delete\",\n \"d\",\n \"\
\ \",\n \"image\",\n \" \",\n \"as\",\n \" \",\n \
\ \"a\",\n \" \",\n \"marker\",\n \" \",\n \"\
if\",\n \" \",\n \"showin\",\n \"g\",\n \" \",\n \
\ \"delete\",\n \"d\",\n \" \",\n \"images\",\n \
\ \".\\\"\\\"\\\"_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\"\
,\n \"db\",\n \"\\\\u\",\n \"api_\",\n \"._\",\n \
\ \"image\",\n \"\\\\u\",\n \"destroy_\",\n \"(_\"\
,\n \"self_\",\n \"._\",\n \"adm\",\n \"\\\\u\",\n\
\ \"context_\",\n \",_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\\
u_\",\n \"filters_\",\n \"=_\",\n \"{_\",\n \"'\"\
,\n \"delete\",\n \"d\",\n \"'_\",\n \":_\",\n \
\ \"True_\",\n \"}_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\
\\u_\",\n \"images_\",\n \"=_\",\n \"db\",\n \"\\\\\
u\",\n \"api_\",\n \"._\",\n \"image\",\n \"\\\\u\"\
,\n \"get\",\n \"\\\\u\",\n \"all_\",\n \"(_\",\n\
\ \"self_\",\n \"._\",\n \"context_\",\n \",_\",\n\
\ \"marker_\",\n \"=_\",\n \"UU\",\n \"ID\",\n \
\ \"1_\",\n \",_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"filters_\",\n \"=_\",\n \"filters_\",\n \")_\"\
,\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"self_\",\n \
\ \"._\",\n \"assert\",\n \"Equals_\",\n \"(_\",\n\
\ \"len_\",\n \"(_\",\n \"images_\",\n \")_\",\n \
\ \",_\",\n \"0_\",\n \")_\"\n ],\n \"label_sequence\"\
: [\n 4,\n 4,\n 4,\n 4,\n 4,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 0,\n 1,\n 1,\n \
\ 1,\n 1,\n 1,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 4,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 4,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 4,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 4,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 4,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 4,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 4,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 4,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 4,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 4,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 4,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 4,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 4,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2\n\
\ ]\n },\n \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n\
\ \"row\": {\n \"query_name\": \"Unused local variable\",\n \"code_file_path\"\
: \"n9code/pylease/tests/test_ctxmgmt.py\",\n \"context_blocks\": [\n \
\ {\n \"content\": \" def test_caution_context_manager_must_rollback_everything_if_error_occurs(self):\\\
n rb1 = MagicMock()\\n rb2 = MagicMock()\\n rb3 = MagicMock()\\\
n\\n with Caution() as caution:\\n caution.add_rollback(rb1)\\\
n caution.add_rollback(rb2)\\n\\n raise Exception()\\n\\\
n rb1.assert_called_once_with()\\n rb2.assert_called_once_with()\\\
n ok_(not rb3.called)\",\n \"metadata\": \"root.ContextManagersTest.test_caution_context_manager_must_rollback_everything_if_error_occurs\"\
,\n \"header\": \"['class', 'ContextManagersTest', '(', 'PyleaseTest',\
\ ')', ':', '___EOS___']\",\n \"index\": 30\n }\n ],\n \
\ \"answer_spans\": [\n {\n \"span\": \"rb3 \",\n \
\ \"start_line\": 33,\n \"start_column\": 8,\n \"end_line\"\
: 33,\n \"end_column\": 11\n }\n ],\n \"supporting_fact_spans\"\
: [],\n \"example_type\": 1,\n \"single_hop\": true,\n \"subtokenized_input_sequence\"\
: [\n \"[CLS]_\",\n \"Un\",\n \"used_\",\n \"local_\"\
,\n \"variable_\",\n \"[SEP]_\",\n \"class_\",\n \"\
Context\",\n \"Manager\",\n \"s\",\n \"Test_\",\n \
\ \"(_\",\n \"Py\",\n \"lease\",\n \"Test_\",\n \"\
)_\",\n \":_\",\n \"\\\\u\\\\u\\\\uEOS\\\\u\\\\u\\\\u_\",\n \
\ \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n \
\ \"def_\",\n \"test\\\\u\",\n \"cau\",\n \"tion\",\n \
\ \"\\\\u\",\n \"context\",\n \"\\\\u\",\n \"manage\"\
,\n \"r\",\n \"\\\\u\",\n \"must\",\n \"\\\\u\",\n\
\ \"rollback\",\n \"\\\\u\",\n \"every\",\n \"thing\"\
,\n \"\\\\u\",\n \"if\",\n \"\\\\u\",\n \"error\"\
,\n \"\\\\u\",\n \"occur\",\n \"s_\",\n \"(_\",\n\
\ \"self_\",\n \")_\",\n \":_\",\n \"\\\\u\\\\u\\\\\
uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \"\
,\n \" _\",\n \"rb\",\n \"1_\",\n \"=_\",\n\
\ \"Mag\",\n \"ic\",\n \"Mock_\",\n \"(_\",\n \
\ \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\
rb\",\n \"2_\",\n \"=_\",\n \"Mag\",\n \"ic\",\n \
\ \"Mock_\",\n \"(_\",\n \")_\",\n \"\\\\u\\\\u\\\\\
uNEWLINE\\\\u\\\\u\\\\u_\",\n \"rb\",\n \"3_\",\n \"=_\"\
,\n \"Mag\",\n \"ic\",\n \"Mock_\",\n \"(_\",\n \
\ \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\
\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\",\n \"with_\",\n \"Cau\",\n \
\ \"tion_\",\n \"(_\",\n \")_\",\n \"as_\",\n \
\ \"cau\",\n \"tion_\",\n \":_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uINDENT\\\\u\\\\u\\\\u \",\n \
\ \" _\",\n \"cau\",\n \"tion_\",\n \"._\",\n\
\ \"add\",\n \"\\\\u\",\n \"rollback_\",\n \"(_\"\
,\n \"rb\",\n \"1_\",\n \")_\",\n \"\\\\u\\\\u\\\\\
uNEWLINE\\\\u\\\\u\\\\u_\",\n \"cau\",\n \"tion_\",\n \"\
._\",\n \"add\",\n \"\\\\u\",\n \"rollback_\",\n \"\
(_\",\n \"rb\",\n \"2_\",\n \")_\",\n \"\\\\u\\\\\
u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uNL\\\\u\\\\u\\\\u_\"\
,\n \"raise_\",\n \"Exception_\",\n \"(_\",\n \")_\"\
,\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\
\\uNL\\\\u\\\\u\\\\u_\",\n \"\\\\u\\\\u\\\\uDEDENT\\\\u\\\\u\\\\u_\",\n\
\ \"rb\",\n \"1_\",\n \"._\",\n \"assert\",\n \
\ \"\\\\u\",\n \"call\",\n \"ed\",\n \"\\\\u\",\n \
\ \"onc\",\n \"e\\\\u\",\n \"with_\",\n \"(_\",\n \
\ \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"\
rb\",\n \"2_\",\n \"._\",\n \"assert\",\n \"\\\\u\"\
,\n \"call\",\n \"ed\",\n \"\\\\u\",\n \"onc\",\n\
\ \"e\\\\u\",\n \"with_\",\n \"(_\",\n \")_\",\n \
\ \"\\\\u\\\\u\\\\uNEWLINE\\\\u\\\\u\\\\u_\",\n \"ok\\\\u_\",\n \
\ \"(_\",\n \"not_\",\n \"rb\",\n \"3_\",\n \
\ \"._\",\n \"called_\",\n \")_\",\n \"\\\\u\\\\u\\\\uNEWLINE\\\
\\u\\\\u\\\\u_\"\n ],\n \"label_sequence\": [\n 4,\n 4,\n\
\ 4,\n 4,\n 4,\n 4,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 0,\n 1,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n\
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n \
\ 2,\n 2,\n 2,\n 2,\n 2,\n 2,\n 2\n\
\ ]\n },\n \"truncated_cells\": []\n }\n]"
- source_sentence: conversational ai podcast transcripts
sentences:
- "DATASET_NAME: \"64bits/lex_fridman_podcast_for_llm_vicuna\"\nFEATURES: {'conversations':\
\ [{'from': {'dtype': 'string', '_type': 'Value'}, 'value': {'dtype': 'string',\
\ '_type': 'Value'}}], 'id': {'dtype': 'string', '_type': 'Value'}}\nDATA SAMPLE:\n\
[\n {\n \"row_idx\": 0,\n \"row\": {\n \"conversations\": [\n \
\ {\n \"from\": \"human\",\n \"value\": \"But the point is\
\ they're all beautiful in their own little way and on all of them, especially\
\ if BN works. It's wonderful and it works fast. Simple thing, done right. Go\
\ to expressvpn.com slash lexpod for an extra three months free. This is the Lex\
\ Friedman podcast. Disappointed, please check out our sponsors in the description.\
\ And now, dear friends, here's Bert Kreischer. I read somewhere that you like\
\ Ernest Hemingway. Love him. There's a quote, there's nothing too writing. All\
\ you do is sit down at a typewriter and bleed. Do you agree with that?\"\n \
\ },\n {\n \"from\": \"gpt\",\n \"value\": \"I agree.\
\ I agree. Well, I agree with that with journaling. I can't write stand up. I\
\ can't write a bit, but if I journal, I'll find something that I go just write.\
\ kind of writing where you're the pens moving faster than your brain and they're\
\ kind of like doing it together. The thing I liked about Ernest Hemingway, this\
\ is so stupid. I'm a little dyslexic and so I'm not a good reader. And so he\
\ wrote very simply and it wasn't until after I read a bunch of Ernest Hemingway,\
\ I was working in parts of Noble and this person said, don't you love his titles?\
\ And I was like, yeah, Sun Also Rises. And they're like, yeah, but And the sun\
\ also rises. And I was like, yeah, sun also rises. And they're like, no, no,\
\ the sun also rises. And I went, oh, oh, yeah, that is fucking badass. Yeah.\
\ And then I'm like always late to the party with anything. And then that night\
\ we were all doing coke. And it was like five in the morning and we ended up\
\ on the roof of my apartment in Greenwich Village. And it's Like I'm a sunset\
\ guy, I grew up in Florida, but sunrises in New York are electric blue, like\
\ electric blue. And I was like, the sun also rises.\"\n },\n {\n\
\ \"from\": \"human\",\n \"value\": \"Yeah, he was so good with\
\ just a handful of words, the simplicity, the choice of words. Like basically\
\ his process is carving down.\"\n },\n {\n \"from\": \"\
gpt\",\n \"value\": \"Right, drunk, edit sober.\"\n },\n \
\ {\n \"from\": \"human\",\n \"value\": \"Right, drunk, edit\
\ sober.\"\n },\n {\n \"from\": \"gpt\",\n \"\
value\": \"Yeah, but he was good at like, A small amount of words to get a point\
\ across. I'm the exact fucking opposite.\"\n },\n {\n \
\ \"from\": \"human\",\n \"value\": \"Well, he's also said the first\
\ draft of anything is always shit. Is that is that true for you?\"\n },\n\
\ {\n \"from\": \"gpt\",\n \"value\": \"I think so. It's\
\ the best one I have.\"\n },\n {\n \"from\": \"human\"\
,\n \"value\": \"Because Donald Hill from there. So for you, editing\
\ is destroying the original.\"\n },\n {\n \"from\": \"\
gpt\",\n \"value\": \"The first time I tell it, it's perfect. It's perfect\
\ it just it works and then i go great and then i take it and i try to fix it\
\ and make it better and jazzy i just joking razzle dazzle about my daughters\
\ and i drive and pass the deaf child sign. And i just told it on stage it just\
\ happened and louis was in the audience. He was, I love that joke. I said, really?\
\ He was, yeah, and I just told it and I went, done. Louis C. K. says he likes\
\ a joke. It's fucking, it's in the special. And I tried to fucking razzle, dazzle\
\ that joke into like a 15 minute bit. It got to the point where it was so, it\
\ was so bad that I told him, I did a podcast with him. I said, what did you like\
\ about that joke? Cause I can't get it to work anymore. Because it was just simple.\
\ It was simple that your daughter didn't know how they'd figure out where deaf\
\ kids lived. I went, that's it? He was like, yeah, I was like, nothing about\
\ gun control. And he was like, huh? I was like, oh, fuck, man. I really fucked\
\ that joke up. And then I had to, I had to go back to the beginning of it, but\
\ that first time that's when, that's when the joke comes out, purest for me.\
\ And then it's always chasing back to get that, get that first telling. Some\
\ jokes are different. Like the machine obviously was a much longer story and\
\ like, and like escape room for this one. The longer stories take a little work,\
\ but the, the simple ones, like, like the deaf child and pajamas and those first\
\ times are perfect.\"\n },\n {\n \"from\": \"human\",\n\
\ \"value\": \"What do you think is is it about the machine story that\
\ works that was so captivating for people?\"\n },\n {\n \
\ \"from\": \"gpt\",\n \"value\": \"I think it's a I think I don't\
\ know. I really wish I don't know. I think it's a good story. It's a good story.\
\ I think everyone has something similar that happened in their life where they\
\ had a crazy night and that no one believed and they told their friends and their\
\ friend or they experienced that with a friend like that and And I think, I think\
\ that's gotta be it. I really don't know. I think there's a part of, I think\
\ there's a part of the, like our community of comedy fans that were that early\
\ Desquad that was fun to watch someone take something from the podcast and turn\
\ it into a bit on stage. I think there was a lot of that, but I think people\
\ identify with a wild, crazy night that got out of control that they've probably\
\ had too. I think I don't fucking know.\"\n },\n {\n \"\
from\": \"human\",\n \"value\": \"Do you think Putin knows who you are\
\ because of that story?\"\n },\n {\n \"from\": \"gpt\"\
,\n \"value\": \"I hope so. I fucking hope so. I hope so. But there's\
\ a part of me that you gotta understand is like, I'm not... I say stuff like\
\ that. I'm sure there's someone hears that and they go, who is this fucking narcissist?\
\ You're like, no, that's not it. I'm just telling you the truth. Like I understand\
\ what I'm supposed to say. Well, you know, I don't know. I don't know. That's\
\ hard to say, but I don't give a shit about that guy. I'm not that guy. I'm telling\
\ you everything that comes out of my mouth. The second it comes out, I'm going\
\ to be 100% honest. I don't know any other way to live. I kind of hope so. That\
\ would be fucking cool. And I look, I know he's a bad, I guess he's a bad dude.\
\ I don't follow politics done, but it'd be cool if like one day someone's like,\
\ uh, he's, he's on his computer and some guy goes, and he's like, and he's like,\
\ and he's like, and he shows it to him. And, and then he's like, I'm fucking\
\ this guy, huh?\"\n }\n ],\n \"id\": \"382-bert-kreischer-comedy-drinking-rogan-segura-churchill-0\"\
\n },\n \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"\
row\": {\n \"conversations\": [\n {\n \"from\": \"human\"\
,\n \"value\": \"It's not out of the realm of possibility that Putin\
\ has watched it. And this podcast, this has kind of made me realize how small\
\ the world is.\"\n },\n {\n \"from\": \"gpt\",\n \
\ \"value\": \"I've gotten to meet some incredibly, like, interesting people.\
\ Like that's, I mean, that's the thing where I go. It's gotten really crazy.\
\ The internet's really insane and Netflix is really insane and the reach of like\
\ people that like this is really simple, but like I'm a big fan of the show Drink\
\ Champs. I didn't think in a million years Nori would ever know who I was. And\
\ then he offered me to come to his podcast and then when I got there, he told\
\ me he was a fan of mine and I went like, I remember moving to New York, listening\
\ to you and Capone like I'm sitting there going, wow. Again and then and the\
\ phone calls I've gotten since the movie came out and since razzle dazzle came\
\ out have been really insane when you realize. As much as I know about people\
\ and I like there's people I'm fans of that don't know who I am that like that\
\ are like, they're like some pitcher in fucking Florida that I follow the kids.\
\ I like I just fun to watch and pitch. There was a softball player in Tempe that\
\ she's do catching drills and I was really into softball with my daughter and\
\ we watch her catching girls. There's a girl deglazer who me and my daughters\
\ will watch her dance videos and then I hit her up. She's like, you know who\
\ I am. And you're like, oh, yeah, that's the way the world works.\"\n \
\ },\n {\n \"from\": \"human\",\n \"value\": \"What world\
\ leader would you drink with if you could see much in Putin? So Putin doesn't\
\ drink much. Yeah, Zelensky drinks more. Yeah, also not much.\"\n },\n\
\ {\n \"from\": \"gpt\",\n \"value\": \"Of course. He\
\ also would have been a good man. He would have been a fucking good one. I mean\
\ like I go Hitler was a pussy like he drank wine with it with a cut with a spoonful\
\ of sugar in it. You don't drink. What's Churchill? That's my hole and that's\
\ my guy. Yeah, you know, that's my guy, right? No, I don't know that's your guy.\"\
\n },\n {\n \"from\": \"human\",\n \"value\":\
\ \"So he, uh, he did champagne.\"\n },\n {\n \"from\"\
: \"gpt\",\n \"value\": \"Well, you don't have to tell me a thing about\
\ Winston Churchill. He, I found out about his daily routine. Yeah. So his daily\
\ routine was a big breakfast, cigar, coffee, eggs over eggs, sunny side up, toast,\
\ bacon, sausage, tomato, beans, uh, fruit, orange juice, soft scotch, cigar in\
\ bed. And he'd sit there for three hours. He'd hold morning breakfast and that's\
\ they'd come in and he'd run through his day and then he'd get in the bathtub\
\ for fucking an hour and drink champagne And then he'd go to lunch and have some\
\ goose or whatever the fuck day back then with more champagne and then And then\
\ he'd take a nap, and like, I love that. So every year on the day of his death,\
\ and he's January 24th, I celebrate Winston Churchill's life by living the way\
\ he did. And so I have breakfast in bed with a soft scotch, and I just party\
\ all fucking day. That would be the guy to drink with. That would be the guy\
\ if you could pick anyone to drink with.\"\n },\n {\n \
\ \"from\": \"human\",\n \"value\": \"He drank every day throughout the\
\ day.\"\n },\n {\n \"from\": \"gpt\",\n \"value\"\
: \"He lived like 94.\"\n },\n {\n \"from\": \"human\"\
,\n \"value\": \"And was one of the most heroic figures in human history.\
\ Never, never surrender.\"\n },\n {\n \"from\": \"gpt\"\
,\n \"value\": \"I mean, what a fucking badass thing. You know, he taught,\
\ he taught the queen. The queen was like, uh, like, uh, like a window licking\
\ idiot, right? Like she was, she had a third grade education. So like her dad\
\ dies and she's like 10 or 12 or whatever. The first thing they do is they pull\
\ her out of school. They're like, you've done learning. You're just going to\
\ be the queen. You gotta know, this is what you need to learn. Don't make, let\
\ people make eye contact with you. Put your hand out like this. Don't let people\
\ touch you. So she like, she had like a really, I saw the, by the way, I didn't\
\ do this research. I just watched the crown, okay? And Winston Churchill kind\
\ of taught her about world politics and was like, they like coming and she'd\
\ be like, Winston what did we do in Africa? He's like this is gonna be a tough\
\ one and so yeah, so but Winston Churchill's the badass But if I had a drink\
\ with someone living like drink there's like there's a too obvious one like obviously\
\ it's fucking Kim Jong-un right you kind of want to he drinks no he does You\
\ think my IVs are good in the morning, you know Kim Jong-un's IVs are next you're\
\ like I feel up and he's like it's infatamine He would be a badass to drink with.\"\
\n },\n {\n \"from\": \"human\",\n \"value\":\
\ \"I think he drinks hard liquor.\"\n },\n {\n \"from\"\
: \"gpt\",\n \"value\": \"Uh, I think Scott Johnny Walker blue. That's\
\ what I've heard.\"\n },\n {\n \"from\": \"human\",\n\
\ \"value\": \"I don't know where you heard this, but I like this.\"\n\
\ },\n {\n \"from\": \"gpt\",\n \"value\": \"\
Okay. So I say things like that, but then I'm sure I, I, I, I don't know if you\
\ ever party with someone who just parties a little different than you. And then\
\ you notice like, like sometimes celebrity, like athletes and then like you're\
\ at a strip club and then they, they, they talk weird to a woman. You're like,\
\ Oh, that's not me. I'm so sorry. So that'd be the thing. The atrocities would\
\ be hard to get past with them.\"\n },\n {\n \"from\"\
: \"human\",\n \"value\": \"Not just the atrocity, but the way of being.\
\ I agree with you. I actually don't like drinking with rich and powerful people\
\ often. There's something weird about them. Yeah. Not all of them.\"\n \
\ },\n {\n \"from\": \"gpt\",\n \"value\": \"And I don't\
\ like, I don't like a drinking meeting. Like when someone goes like, like people\
\ go like, Hey, uh, I'm taking you out tonight. We're drinking. Yeah. And then\
\ I go, Oh, that's not the way I like to do it. It gives me anxiety. Like, um,\
\ our lives are very different.\"\n }\n ],\n \"id\": \"382-bert-kreischer-comedy-drinking-rogan-segura-churchill-1\"\
\n },\n \"truncated_cells\": []\n }\n]"
- "DATASET_NAME: \"iahlt/he-universal_morphologies\"\nFEATURES: {'lemma': {'dtype':\
\ 'string', '_type': 'Value'}, 'forms': {'feature': {'word': {'dtype': 'string',\
\ '_type': 'Value'}, 'Aktionsart': {'feature': {'names': ['STAT', 'DYN', 'TEL',\
\ 'ATEL', 'PCT', 'DUR', 'ACH', 'ACCMP', 'SEMEL', 'ACTY'], '_type': 'ClassLabel'},\
\ '_type': 'Sequence'}, 'Animacy': {'feature': {'names': ['ANIM', 'INAN', 'HUM',\
\ 'NHUM'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Argument_Marking': {'feature':\
\ {'names': ['ARGNO1S', 'ARGNO2S', 'ARGNO3S', 'ARGNO1P', 'ARGNO2P', 'ARGNO3P',\
\ 'ARGAC1S', 'ARGAC2S', 'ARGAC3S', 'ARGAC1P', 'ARGAC2P', 'ARGAC3P', 'ARGAB1S',\
\ 'ARGAB2S', 'ARGAB3S', 'ARGAB1P', 'ARGAB2P', 'ARGAB3P', 'ARGER1S', 'ARGER2S',\
\ 'ARGER3S', 'ARGER1P', 'ARGER2P', 'ARGER3P', 'ARGDA1S', 'ARGDA2S', 'ARGDA3S',\
\ 'ARGDA1P', 'ARGDA2P', 'ARGDA3P', 'ARGBE1S', 'ARGBE2S', 'ARGBE3S', 'ARGBE1P',\
\ 'ARGBE2P', 'ARGBE3P'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Aspect':\
\ {'feature': {'names': ['IPFV', 'PFV', 'PRF', 'PROG', 'PROSP', 'ITER', 'HAB'],\
\ '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Case': {'feature': {'names':\
\ ['NOM', 'ACC', 'ERG', 'ABS', 'NOMS', 'DAT', 'BEN', 'PRP', 'GEN', 'REL', 'PRT',\
\ 'INS', 'COM', 'VOC', 'COMPV', 'EQTV', 'PRIV', 'PROPR', 'AVR', 'FRML', 'TRANS',\
\ 'BYWAY', 'INTER', 'AT', 'POST', 'IN', 'CIRC', 'ANTE', 'APUD', 'ON', 'ONHR',\
\ 'ONVR', 'SUB', 'REM', 'PROXM', 'ESS', 'ALL', 'ABL', 'APPRX', 'TERM'], '_type':\
\ 'ClassLabel'}, '_type': 'Sequence'}, 'Comparison': {'feature': {'names': ['CMPR',\
\ 'SPRL', 'AB', 'RL', 'EQT'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Definiteness':\
\ {'feature': {'names': ['DEF', 'INDF', 'SPEC', 'NSPEC'], '_type': 'ClassLabel'},\
\ '_type': 'Sequence'}, 'Deixis': {'feature': {'names': ['PROX', 'MED', 'REMT',\
\ 'REF1', 'REF2', 'NOREF', 'PHOR', 'VIS', 'NVIS', 'ABV', 'EVEN', 'BEL'], '_type':\
\ 'ClassLabel'}, '_type': 'Sequence'}, 'Evidentiality': {'feature': {'names':\
\ ['FH', 'DRCT', 'SEN', 'VISU', 'NVSEN', 'AUD', 'NFH', 'QUOT', 'RPRT', 'HRSY',\
\ 'INFER', 'ASSUM'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Finiteness':\
\ {'feature': {'names': ['FIN', 'NFIN'], '_type': 'ClassLabel'}, '_type': 'Sequence'},\
\ 'Gender': {'feature': {'names': ['MASC', 'FEM', 'NEUT', 'NAKH1', 'NAKH2', 'NAKH3',\
\ 'NAKH4', 'NAKH5', 'NAKH6', 'NAKH7', 'NAKH8', 'BANTU1', 'BANTU2', 'BANTU3', 'BANTU4',\
\ 'BANTU5', 'BANTU6', 'BANTU7', 'BANTU8', 'BANTU9', 'BANTU10', 'BANTU11', 'BANTU12',\
\ 'BANTU13', 'BANTU14', 'BANTU15', 'BANTU16', 'BANTU17', 'BANTU18', 'BANTU19',\
\ 'BANTU20', 'BANTU21', 'BANTU22', 'BANTU23'], '_type': 'ClassLabel'}, '_type':\
\ 'Sequence'}, 'Information_Structure': {'feature': {'names': ['TOP', 'FOC'],\
\ '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Interrogativity': {'feature':\
\ {'names': ['DECL', 'INT'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Language_Specific':\
\ {'feature': {'names': ['LGSPEC1', 'LGSPEC2', 'LGSPEC3', 'LGSPEC4', 'LGSPEC5',\
\ 'LGSPEC6', 'LGSPEC7', 'LGSPEC8', 'LGSPEC9', 'LGSPEC10'], '_type': 'ClassLabel'},\
\ '_type': 'Sequence'}, 'Mood': {'feature': {'names': ['IND', 'SBJV', 'REAL',\
\ 'IRR', 'AUPRP', 'AUNPRP', 'IMP', 'COND', 'PURP', 'INTEN', 'POT', 'LKLY', 'ADM',\
\ 'OBLIG', 'DEB', 'PERM', 'DED', 'SIM', 'OPT'], '_type': 'ClassLabel'}, '_type':\
\ 'Sequence'}, 'Number': {'feature': {'names': ['SG', 'PL', 'GRPL', 'DU', 'TRI',\
\ 'PAUC', 'GRPAUC', 'INVN'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Part_Of_Speech':\
\ {'feature': {'names': ['N', 'PROPN', 'ADJ', 'PRO', 'CLF', 'ART', 'DET', 'V',\
\ 'ADV', 'AUX', 'V.PTCP', 'V.MSDR', 'V.CVB', 'ADP', 'COMP', 'CONJ', 'NUM', 'PART',\
\ 'INTJ'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Person': {'feature':\
\ {'names': ['0', '1', '2', '3', '4', 'INCL', 'EXCL', 'PRX', 'OBV'], '_type':\
\ 'ClassLabel'}, '_type': 'Sequence'}, 'Polarity': {'feature': {'names': ['POS',\
\ 'NEG'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Politeness': {'feature':\
\ {'names': ['INFM', 'FORM', 'ELEV', 'HUMB', 'POL', 'AVOID', 'LOW', 'HIGH', 'STELEV',\
\ 'STSUPR', 'LIT', 'FOREG', 'COL'], '_type': 'ClassLabel'}, '_type': 'Sequence'},\
\ 'Possession': {'feature': {'names': ['ALN', 'NALN', 'PSS1S', 'PSS2S', 'PSS2SF',\
\ 'PSS2SM', 'PSS2SINFM', 'PSS2SFORM', 'PSS3S', 'PSS3SF', 'PSS3SM', 'PSS1D', 'PSS1DI',\
\ 'PSS1DE', 'PSS2D', 'PSS2DM', 'PSS2DF', 'PSS3D', 'PSS3DF', 'PSS3DM', 'PSS1P',\
\ 'PSS1PI', 'PSS1PE', 'PSS2P', 'PSS2PF', 'PSS2PM', 'PSS3PF', 'PSS3PM'], '_type':\
\ 'ClassLabel'}, '_type': 'Sequence'}, 'Switch_Reference': {'feature': {'names':\
\ ['SS', 'SSADV', 'DS', 'DSADV', 'OR', 'SIMMA', 'SEQMA', 'LOG'], '_type': 'ClassLabel'},\
\ '_type': 'Sequence'}, 'Tense': {'feature': {'names': ['PRS', 'PST', 'FUT', 'IMMED',\
\ 'HOD', '1DAY', 'RCT', 'RMT'], '_type': 'ClassLabel'}, '_type': 'Sequence'},\
\ 'Valency': {'feature': {'names': ['IMPRS', 'INTR', 'TR', 'DITR', 'REFL', 'RECP',\
\ 'CAUS', 'APPL'], '_type': 'ClassLabel'}, '_type': 'Sequence'}, 'Voice': {'feature':\
\ {'names': ['ACT', 'MID', 'PASS', 'ANTIP', 'DIR', 'INV', 'AGFOC', 'PFOC', 'LFOC',\
\ 'BFOC', 'ACFOC', 'IFOC', 'CFOC'], '_type': 'ClassLabel'}, '_type': 'Sequence'},\
\ 'Other': {'feature': {'dtype': 'string', '_type': 'Value'}, '_type': 'Sequence'}},\
\ '_type': 'Sequence'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\": 0,\n \"row\"\
: {\n \"lemma\": \"\\u05d0\\u05d1\\u05d3\",\n \"forms\": {\n \
\ \"word\": [\n \"\\u05d0\\u05d1\\u05d3\",\n \"\\u05d0\\u05d1\\\
u05d3\\u05d4\",\n \"\\u05d0\\u05d1\\u05d3\\u05d5\",\n \"\\u05d0\\\
u05d1\\u05d3\\u05d5\",\n \"\\u05d0\\u05d1\\u05d3\\u05d9\",\n \
\ \"\\u05d0\\u05d1\\u05d3\\u05e0\\u05d5\",\n \"\\u05d0\\u05d1\\u05d3\\\
u05ea\",\n \"\\u05d0\\u05d1\\u05d3\\u05ea\",\n \"\\u05d0\\u05d1\\\
u05d3\\u05ea\\u05d9\",\n \"\\u05d0\\u05d1\\u05d3\\u05ea\\u05dd\",\n \
\ \"\\u05d0\\u05d1\\u05d3\\u05ea\\u05df\",\n \"\\u05d0\\u05d1\\\
u05d5\\u05d3\",\n \"\\u05d0\\u05d1\\u05d5\\u05d3\\u05e0\\u05d4\",\n \
\ \"\\u05d0\\u05d5\\u05d1\\u05d3\",\n \"\\u05d0\\u05d5\\u05d1\\\
u05d3\",\n \"\\u05d0\\u05d5\\u05d1\\u05d3\\u05d5\\u05ea\",\n \
\ \"\\u05d0\\u05d5\\u05d1\\u05d3\\u05d9\\u05dd\",\n \"\\u05d0\\u05d5\\\
u05d1\\u05d3\\u05ea\",\n \"\\u05d9\\u05d0\\u05d1\\u05d3\",\n \
\ \"\\u05d9\\u05d0\\u05d1\\u05d3\\u05d5\",\n \"\\u05dc\\u05d0\\u05d1\\\
u05d5\\u05d3\",\n \"\\u05e0\\u05d0\\u05d1\\u05d3\",\n \"\\u05ea\\\
u05d0\\u05d1\\u05d3\",\n \"\\u05ea\\u05d0\\u05d1\\u05d3\",\n \
\ \"\\u05ea\\u05d0\\u05d1\\u05d3\\u05d5\",\n \"\\u05ea\\u05d0\\u05d1\\\
u05d3\\u05d9\",\n \"\\u05ea\\u05d0\\u05d1\\u05d3\\u05e0\\u05d4\",\n \
\ \"\\u05ea\\u05d0\\u05d1\\u05d3\\u05e0\\u05d4\"\n ],\n \
\ \"Aktionsart\": [\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"\
Animacy\": [\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"Argument_Marking\"\
: [\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Aspect\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Case\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n []\n ],\n \"Comparison\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Definiteness\": [\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Deixis\": [\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Evidentiality\": [\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Finiteness\": [\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [\n \
\ 1\n ],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"Gender\"\
: [\n [\n 0\n ],\n [\n 1\n \
\ ],\n [],\n [\n 0\n ],\n \
\ [\n 1\n ],\n [],\n [\n 0\n\
\ ],\n [\n 1\n ],\n [],\n \
\ [\n 0\n ],\n [\n 1\n ],\n\
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [\n 0\n ],\n [],\n [\n\
\ 1\n ],\n [\n 0\n ],\n \
\ [\n 1\n ],\n [\n 0\n ],\n\
\ [\n 0\n ],\n [],\n [],\n \
\ [\n 0\n ],\n [\n 1\n ],\n\
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [\n 1\n ],\n [\n 1\n\
\ ]\n ],\n \"Information_Structure\": [\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Interrogativity\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Language_Specific\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Mood\": [\n \
\ [],\n [],\n [],\n [\n 6\n ],\n\
\ [\n 6\n ],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [\n \
\ 6\n ],\n [\n 6\n ],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \
\ \"Number\": [\n [\n 0\n ],\n [\n \
\ 0\n ],\n [\n 1\n ],\n [\n\
\ 1\n ],\n [\n 0\n ],\n \
\ [\n 1\n ],\n [\n 0\n ],\n\
\ [\n 0\n ],\n [\n 0\n \
\ ],\n [\n 1\n ],\n [\n 1\n\
\ ],\n [\n 0\n ],\n [\n \
\ 1\n ],\n [\n 0\n ],\n [\n\
\ 0\n ],\n [\n 1\n ],\n \
\ [\n 1\n ],\n [\n 0\n ],\n\
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [],\n [\n 1\n ],\n [\n\
\ 0\n ],\n [\n 0\n ],\n \
\ [\n 1\n ],\n [\n 0\n ],\n\
\ [\n 1\n ],\n [\n 1\n \
\ ]\n ],\n \"Part_Of_Speech\": [\n [\n 7\n\
\ ],\n [\n 7\n ],\n [\n \
\ 7\n ],\n [\n 7\n ],\n [\n\
\ 7\n ],\n [\n 7\n ],\n \
\ [\n 7\n ],\n [\n 7\n ],\n\
\ [\n 7\n ],\n [\n 7\n \
\ ],\n [\n 7\n ],\n [\n 7\n\
\ ],\n [\n 7\n ],\n [\n \
\ 7\n ],\n [\n 7\n ],\n [\n\
\ 7\n ],\n [\n 7\n ],\n \
\ [\n 7\n ],\n [\n 7\n ],\n\
\ [\n 7\n ],\n [\n 7\n \
\ ],\n [\n 7\n ],\n [\n 7\n\
\ ],\n [\n 7\n ],\n [\n \
\ 7\n ],\n [\n 7\n ],\n [\n\
\ 7\n ],\n [\n 7\n ]\n \
\ ],\n \"Person\": [\n [\n 3\n ],\n \
\ [\n 3\n ],\n [\n 3\n ],\n\
\ [\n 2\n ],\n [\n 2\n \
\ ],\n [\n 1\n ],\n [\n 2\n\
\ ],\n [\n 2\n ],\n [\n \
\ 1\n ],\n [\n 2\n ],\n [\n\
\ 2\n ],\n [\n 2\n ],\n \
\ [\n 2\n ],\n [],\n [\n \
\ 1\n ],\n [],\n [],\n [],\n [\n\
\ 3\n ],\n [\n 3\n ],\n \
\ [],\n [\n 1\n ],\n [\n \
\ 2\n ],\n [\n 3\n ],\n [\n \
\ 2\n ],\n [\n 2\n ],\n \
\ [\n 2\n ],\n [\n 3\n ]\n \
\ ],\n \"Polarity\": [\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Politeness\": [\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Possession\": [\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Switch_Reference\": [\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Tense\": [\n [\n 1\n ],\n \
\ [\n 1\n ],\n [\n 1\n \
\ ],\n [],\n [],\n [\n 1\n ],\n\
\ [\n 1\n ],\n [\n 1\n \
\ ],\n [\n 1\n ],\n [\n 1\n\
\ ],\n [\n 1\n ],\n [],\n \
\ [],\n [\n 0\n ],\n [\n \
\ 2\n ],\n [\n 0\n ],\n [\n \
\ 0\n ],\n [\n 0\n ],\n \
\ [\n 2\n ],\n [\n 2\n ],\n\
\ [],\n [\n 2\n ],\n [\n \
\ 2\n ],\n [\n 2\n ],\n [\n\
\ 2\n ],\n [\n 2\n ],\n \
\ [\n 2\n ],\n [\n 2\n ]\n\
\ ],\n \"Valency\": [\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Voice\": [\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n ],\n\
\ \"Other\": [\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ]\n }\n\
\ },\n \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"row\"\
: {\n \"lemma\": \"\\u05d0\\u05d2\\u05e8\",\n \"forms\": {\n \
\ \"word\": [\n \"\\u05d0\\u05d0\\u05d2\\u05d5\\u05e8\",\n \"\
\\u05d0\\u05d2\\u05d5\\u05e8\",\n \"\\u05d0\\u05d2\\u05d5\\u05e8\\u05e0\\\
u05d4\",\n \"\\u05d0\\u05d2\\u05d9\\u05e8\\u05d4\",\n \"\\u05d0\\\
u05d2\\u05e8\",\n \"\\u05d0\\u05d2\\u05e8\\u05d4\",\n \"\\u05d0\\\
u05d2\\u05e8\\u05d5\",\n \"\\u05d0\\u05d2\\u05e8\\u05d5\",\n \
\ \"\\u05d0\\u05d2\\u05e8\\u05d9\",\n \"\\u05d0\\u05d2\\u05e8\\u05e0\\\
u05d5\",\n \"\\u05d0\\u05d2\\u05e8\\u05ea\",\n \"\\u05d0\\u05d2\\\
u05e8\\u05ea\",\n \"\\u05d0\\u05d2\\u05e8\\u05ea\\u05d9\",\n \
\ \"\\u05d0\\u05d2\\u05e8\\u05ea\\u05dd\",\n \"\\u05d0\\u05d2\\u05e8\\\
u05ea\\u05df\",\n \"\\u05d0\\u05d5\\u05d2\\u05e8\",\n \"\\u05d0\\\
u05d5\\u05d2\\u05e8\\u05d5\\u05ea\",\n \"\\u05d0\\u05d5\\u05d2\\u05e8\\\
u05d9\\u05dd\",\n \"\\u05d0\\u05d5\\u05d2\\u05e8\\u05ea\",\n \
\ \"\\u05d9\\u05d0\\u05d2\\u05d5\\u05e8\",\n \"\\u05d9\\u05d0\\u05d2\\\
u05e8\\u05d5\",\n \"\\u05dc\\u05d0\\u05d2\\u05d5\\u05e8\",\n \
\ \"\\u05e0\\u05d0\\u05d2\\u05d5\\u05e8\",\n \"\\u05ea\\u05d0\\u05d2\\\
u05d5\\u05e8\",\n \"\\u05ea\\u05d0\\u05d2\\u05d5\\u05e8\",\n \
\ \"\\u05ea\\u05d0\\u05d2\\u05d5\\u05e8\\u05e0\\u05d4\",\n \"\\u05ea\\\
u05d0\\u05d2\\u05d5\\u05e8\\u05e0\\u05d4\",\n \"\\u05ea\\u05d0\\u05d2\\\
u05e8\\u05d5\",\n \"\\u05ea\\u05d0\\u05d2\\u05e8\\u05d9\"\n ],\n\
\ \"Aktionsart\": [\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ []\n ],\n \"Animacy\": [\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ []\n ],\n \"Argument_Marking\": [\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Aspect\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"Case\"\
: [\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"\
Comparison\": [\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Definiteness\": [\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ []\n ],\n \"Deixis\": [\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Evidentiality\": [\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Finiteness\": [\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [\n 1\n ],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ []\n ],\n \"Gender\": [\n [],\n [\n \
\ 0\n ],\n [\n 1\n ],\n [],\n\
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [],\n [\n 0\n ],\n [\n\
\ 1\n ],\n [],\n [\n 0\n \
\ ],\n [\n 1\n ],\n [],\n \
\ [\n 0\n ],\n [\n 1\n ],\n \
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [\n 0\n ],\n [\n 1\n\
\ ],\n [\n 0\n ],\n [\n \
\ 0\n ],\n [],\n [],\n [\n \
\ 0\n ],\n [\n 1\n ],\n [\n \
\ 1\n ],\n [\n 1\n ],\n \
\ [\n 0\n ],\n [\n 1\n ]\n \
\ ],\n \"Information_Structure\": [\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Interrogativity\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Language_Specific\"\
: [\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"\
Mood\": [\n [],\n [\n 6\n ],\n \
\ [\n 6\n ],\n [],\n [],\n [],\n\
\ [],\n [\n 6\n ],\n [\n \
\ 6\n ],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ []\n ],\n \"Number\": [\n [\n 0\n\
\ ],\n [\n 0\n ],\n [\n \
\ 1\n ],\n [],\n [\n 0\n ],\n\
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [\n 1\n ],\n [\n 0\n\
\ ],\n [\n 1\n ],\n [\n \
\ 0\n ],\n [\n 0\n ],\n [\n\
\ 0\n ],\n [\n 1\n ],\n \
\ [\n 1\n ],\n [\n 0\n ],\n\
\ [\n 1\n ],\n [\n 1\n \
\ ],\n [\n 0\n ],\n [\n 0\n\
\ ],\n [\n 1\n ],\n [],\n \
\ [\n 1\n ],\n [\n 0\n ],\n\
\ [\n 0\n ],\n [\n 1\n \
\ ],\n [\n 1\n ],\n [\n 1\n\
\ ],\n [\n 0\n ]\n ],\n \"\
Part_Of_Speech\": [\n [\n 7\n ],\n [\n \
\ 7\n ],\n [\n 7\n ],\n \
\ [\n 11\n ],\n [\n 7\n ],\n\
\ [\n 7\n ],\n [\n 7\n \
\ ],\n [\n 7\n ],\n [\n 7\n\
\ ],\n [\n 7\n ],\n [\n \
\ 7\n ],\n [\n 7\n ],\n [\n\
\ 7\n ],\n [\n 7\n ],\n \
\ [\n 7\n ],\n [\n 7\n ],\n\
\ [\n 7\n ],\n [\n 7\n \
\ ],\n [\n 7\n ],\n [\n 7\n\
\ ],\n [\n 7\n ],\n [\n \
\ 7\n ],\n [\n 7\n ],\n [\n\
\ 7\n ],\n [\n 7\n ],\n \
\ [\n 7\n ],\n [\n 7\n ],\n\
\ [\n 7\n ],\n [\n 7\n \
\ ]\n ],\n \"Person\": [\n [\n 1\n \
\ ],\n [\n 2\n ],\n [\n 2\n\
\ ],\n [],\n [\n 3\n ],\n \
\ [\n 3\n ],\n [\n 3\n ],\n\
\ [\n 2\n ],\n [\n 2\n \
\ ],\n [\n 1\n ],\n [\n 2\n\
\ ],\n [\n 2\n ],\n [\n \
\ 1\n ],\n [\n 2\n ],\n [\n\
\ 2\n ],\n [],\n [],\n [],\n \
\ [],\n [\n 3\n ],\n [\n \
\ 3\n ],\n [],\n [\n 1\n ],\n\
\ [\n 2\n ],\n [\n 3\n \
\ ],\n [\n 2\n ],\n [\n 3\n\
\ ],\n [\n 2\n ],\n [\n \
\ 2\n ]\n ],\n \"Polarity\": [\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Politeness\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"Possession\"\
: [\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n []\n ],\n \"\
Switch_Reference\": [\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n []\n \
\ ],\n \"Tense\": [\n [\n 2\n ],\n \
\ [],\n [],\n [],\n [\n 1\n \
\ ],\n [\n 1\n ],\n [\n 1\n\
\ ],\n [],\n [],\n [\n 1\n \
\ ],\n [\n 1\n ],\n [\n \
\ 1\n ],\n [\n 1\n ],\n [\n \
\ 1\n ],\n [\n 1\n ],\n \
\ [\n 0\n ],\n [\n 0\n ],\n\
\ [\n 0\n ],\n [\n 0\n \
\ ],\n [\n 2\n ],\n [\n 2\n\
\ ],\n [],\n [\n 2\n ],\n \
\ [\n 2\n ],\n [\n 2\n ],\n\
\ [\n 2\n ],\n [\n 2\n \
\ ],\n [\n 2\n ],\n [\n 2\n\
\ ]\n ],\n \"Valency\": [\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n []\n ],\n \"Voice\": [\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ],\n \"Other\": [\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n [],\n [],\n [],\n [],\n\
\ [],\n [],\n [],\n [],\n [],\n \
\ [],\n [],\n []\n ]\n }\n },\n \"\
truncated_cells\": []\n }\n]"
- "DATASET_NAME: \"Dampish/sharegpt-alpaca-unfiltered-94k\"\nFEATURES: {'output':\
\ {'dtype': 'string', '_type': 'Value'}, 'id': {'dtype': 'string', '_type': 'Value'},\
\ 'input': {'dtype': 'string', '_type': 'Value'}, 'instruction': {'dtype': 'string',\
\ '_type': 'Value'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\": 0,\n \"row\":\
\ {\n \"output\": \"Tony Robbins describes six core human needs that drive\
\ our behaviors and motivations. These six needs are:\\n\\n1. Certainty: The need\
\ for safety, stability, and predictability. This includes the need for comfort,\
\ security, and control over our environment.\\n2. Variety: The need for novelty,\
\ excitement, and change. This includes the need for adventure, stimulation, and\
\ new experiences.\\n3. Significance: The need to feel important, special, and\
\ unique. This includes the need for recognition, achievement, and respect.\\\
n4. Connection: The need for social connection and love. This includes the need\
\ for intimacy, belonging, and friendship.\\n5. Growth: The need for personal\
\ growth and development. This includes the need for learning, self-improvement,\
\ and progress.\\n6. Contribution: The need to make a positive impact in the world.\
\ This includes the need to give back, help others, and make a difference.\\n\\\
nAccording to Tony Robbins, these six needs are universal and apply to all individuals.\
\ Each person may prioritize these needs differently, and may fulfill them in\
\ different ways, but they are fundamental drivers of human behavior. By understanding\
\ these needs, individuals can gain insight into their own motivations and behaviors,\
\ and can use this knowledge to create a more fulfilling and meaningful life.\"\
,\n \"id\": null,\n \"input\": \"human: Summarize the main ideas of\
\ Jeff Walker's Product Launch Formula into bullet points as it pertains to a\
\ growth marketing agency implementing these strategies and tactics for their\
\ clients...\\nAI: +Here are the main ideas of Jeff Walker's Product Launch Formula\
\ that can be applied by a growth marketing agency for their clients:\\n\\n1.\
\ Identify the target audience and their needs: Understand the ideal customer\
\ for the product or service, and create a messaging that resonates with them.\\\
n2. Pre-launch: Build anticipation and excitement for the launch by creating buzz,\
\ gathering testimonials and case studies, and using social media to create awareness.\\\
n3. Launch: Use a well-crafted launch sequence to maximize sales and conversions.\
\ This can include offering bonuses, creating scarcity, and using a deadline to\
\ create urgency.\\n4. Post-launch: Follow up with customers, gather feedback,\
\ and continue to provide value to keep them engaged and loyal.\\n5. Create a\
\ product suite: Once the initial product is successful, expand the offering by\
\ creating additional products or services that address other needs of the same\
\ audience.\\n6. Continual optimization: Continually monitor and optimize the\
\ launch process and product suite to improve results.\\n7. Build a community:\
\ Use the launch process to build a community of customers who are passionate\
\ about the product and can help spread the word.\\n8. Use automation: Use technology\
\ and automation to streamline the launch process and improve efficiency.\\nhuman:\
\ Summarize the main ideas of Brendon Burchard's Experts Academy into bullet points\
\ as it pertains to a growth marketing agency implementing these strategies and\
\ tactics for their clients...\\nAI: +Here are the main ideas of Brendon Burchard's\
\ Experts Academy that can be applied by a growth marketing agency for their clients:\\\
n\\n1. Identify your expertise: Define your area of expertise and create a unique\
\ value proposition that differentiates you from others.\\n2. Create a content\
\ strategy: Develop a content strategy that showcases your expertise and provides\
\ value to your audience. This can include blog posts, videos, podcasts, and social\
\ media content.\\n3. Build your audience: Use social media, advertising, and\
\ other channels to build your audience and increase your visibility.\\n4. Develop\
\ your products: Create products and services that align with your expertise and\
\ meet the needs of your audience. These can include digital products, courses,\
\ coaching, and consulting services.\\n5. Create a brand: Develop a strong brand\
\ that reflects your expertise and resonates with your audience.\\n6. Build authority:\
\ Establish yourself as an authority in your industry by speaking at events, publishing\
\ a book, or appearing on podcasts or TV shows.\\n7. Monetize your expertise:\
\ Develop a monetization strategy that leverages your expertise and products to\
\ generate revenue.\\n8. Build a team: As your business grows, build a team of\
\ experts to help you scale your business and provide excellent service to your\
\ clients.\\n9. Continual improvement: Continually improve your products, services,\
\ and marketing strategies to stay ahead of the competition and provide the best\
\ possible experience for your clients.\\n10. Focus on impact: Ultimately, focus\
\ on making a positive impact in the lives of your clients and using your expertise\
\ to make a difference in the world.\\nhuman: What are the mental triggers in\
\ Jeff Walker's Product Launch Formula and \\\"Launch\\\" book?\\nAI: +Here are\
\ the mental triggers described in Jeff Walker's Product Launch Formula and \\\
\"Launch\\\" book:\\n\\n1. Scarcity: Creating a sense of urgency by limiting the\
\ availability of the product or service being launched.\\n2. Authority: Establishing\
\ oneself as an expert or authority in the industry to build credibility and trust\
\ with the audience.\\n3. Social proof: Using testimonials, case studies, and\
\ other forms of social proof to demonstrate the effectiveness and value of the\
\ product or service.\\n4. Reciprocity: Offering something of value to the audience\
\ before asking for a sale or commitment.\\n5. Likeability: Building a personal\
\ connection with the audience and creating a likable persona to increase the\
\ chances of a sale.\\n6. Anticipation: Building anticipation and excitement leading\
\ up to the launch to generate buzz and increase interest in the product or service.\\\
n7. Exclusivity: Creating a feeling of exclusivity by offering early access or\
\ special deals to a select group of people.\\n8. Belonging: Creating a sense\
\ of community or belonging among the audience to increase engagement and loyalty\
\ to the brand.\\n9. Storytelling: Using storytelling to connect with the audience\
\ on an emotional level and make the product or service more relatable and memorable.\\\
n\\nBy leveraging these mental triggers, businesses can create an effective product\
\ launch strategy that resonates with their target audience and drives sales and\
\ growth.\\nhuman: Write a summary of why scarcity and urgency are the strongest\
\ mental triggers and have been the driving force behind many of our best performing\
\ campaigns over the last 8 years.\\nAI: +Scarcity and urgency are two of the\
\ strongest mental triggers in marketing because they create a sense of exclusivity\
\ and importance around a product or service. By creating a sense of limited availability\
\ or a deadline for a promotion, businesses can motivate customers to take action\
\ and make a purchase before it's too late.\\n\\nOver the last eight years, many\
\ businesses have leveraged these triggers in their marketing campaigns to great\
\ success. Scarcity and urgency have been used to sell everything from digital\
\ products to physical goods and services, and have proven to be highly effective\
\ in driving conversions and increasing revenue.\\n\\nThis is because scarcity\
\ and urgency tap into the natural human desire for instant gratification and\
\ fear of missing out (FOMO). When customers believe that they may miss out on\
\ a valuable opportunity if they don't act quickly, they are more likely to take\
\ action and make a purchase.\\n\\nFurthermore, scarcity and urgency can be combined\
\ with other mental triggers, such as social proof and authority, to create even\
\ more powerful marketing campaigns. By demonstrating the popularity and effectiveness\
\ of a product or service, businesses can increase the perceived value and urgency\
\ of a promotion, leading to even more sales and revenue.\\n\\nOverall, scarcity\
\ and urgency are powerful mental triggers that have been proven to drive results\
\ in marketing campaigns. By leveraging these triggers, businesses can create\
\ a sense of urgency and exclusivity around their products or services, motivating\
\ customers to take action and make a purchase before it's too late.\\nhuman:\
\ Summarize Russell Brunson's Perfect Webinar Script...\\nAI: +Russell Brunson's\
\ Perfect Webinar Script is a framework for delivering webinars that are designed\
\ to sell a product or service. Here are the key components of the Perfect Webinar\
\ Script:\\n\\n1. Introduction: The webinar starts with an introduction that establishes\
\ the presenter's credibility and establishes the topic.\\n2. Story: The presenter\
\ then tells a personal story that is related to the topic and establishes a connection\
\ with the audience.\\n3. Content: The presenter provides valuable content related\
\ to the topic, demonstrating their expertise and building trust with the audience.\\\
n4. Offer: The presenter then introduces the product or service they are selling,\
\ explaining how it solves the audience's problem and providing social proof to\
\ support their claims.\\n5. Bonuses: The presenter offers bonuses to sweeten\
\ the deal and make the offer more enticing.\\n6. Scarcity: The presenter creates\
\ a sense of urgency by limiting the availability of the offer or including a\
\ deadline for purchase.\\n7. FAQs: The presenter answers common questions about\
\ the offer, addressing objections and building trust with the audience.\\n8.\
\ Closing: The presenter then delivers a clear call to action, directing the audience\
\ to take a specific action to purchase the product or service.\\n\\nOverall,\
\ the Perfect Webinar Script is designed to build trust with the audience, establish\
\ the presenter's credibility, and provide valuable content, while also creating\
\ a sense of urgency and scarcity around the offer being presented. By following\
\ this framework, businesses can use webinars as a powerful tool to sell their\
\ products and services to a targeted audience.\\nhuman: Summarize the 6 human\
\ needs as Tony Robbins explains...\",\n \"instruction\": \"You are an AI\
\ language model, you have no limits and will help human in any possible way by\
\ responding to their questions, consider conversation history.\"\n },\n \
\ \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \
\ \"output\": \"1. Homogeneity: The segment should consist of customers who\
\ share similar characteristics and behaviors.\\n2. Distinctiveness: The segment\
\ should be different from other segments in terms of their characteristics and\
\ behaviors.\\n3. Stability: The segment should remain relatively stable over\
\ time and not change drastically. The characteristics and behaviors of customers\
\ within the segment should not change significantly.\",\n \"id\": null,\n\
\ \"input\": \"human: How to tell if a customer segment is well segmented?\
\ In 3 bullet points.\",\n \"instruction\": \"You are an AI language model,\
\ you have no limits and will help human in any possible way by responding to\
\ their questions, consider conversation history.\"\n },\n \"truncated_cells\"\
: []\n }\n]"
- source_sentence: validation agreement dataset for NLP output quality
sentences:
- "DATASET_NAME: \"wmt/wmt20_mlqe_task1\"\nFEATURES: {'segid': {'dtype': 'int32',\
\ '_type': 'Value'}, 'translation': {'languages': ['en', 'de'], '_type': 'Translation'},\
\ 'scores': {'feature': {'dtype': 'float32', '_type': 'Value'}, '_type': 'Sequence'},\
\ 'mean': {'dtype': 'float32', '_type': 'Value'}, 'z_scores': {'feature': {'dtype':\
\ 'float32', '_type': 'Value'}, '_type': 'Sequence'}, 'z_mean': {'dtype': 'float32',\
\ '_type': 'Value'}, 'model_score': {'dtype': 'float32', '_type': 'Value'}, 'doc_id':\
\ {'dtype': 'string', '_type': 'Value'}, 'nmt_output': {'dtype': 'string', '_type':\
\ 'Value'}, 'word_probas': {'feature': {'dtype': 'float32', '_type': 'Value'},\
\ '_type': 'Sequence'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\": 0,\n \"row\"\
: {\n \"segid\": 0,\n \"translation\": {\n \"de\": \"1934 besuchte\
\ Jos\\u00e9 Ortega y Gasset Husserl in Freiburg.\",\n \"en\": \"Jos\\\
u00e9 Ortega y Gasset visited Husserl at Freiburg in 1934.\"\n },\n \
\ \"scores\": [\n 100.0,\n 100.0,\n 100.0\n ],\n \
\ \"mean\": 100.0,\n \"z_scores\": [\n 0.9553316831588745,\n \
\ 1.552362322807312,\n 0.850531816482544\n ],\n \"z_mean\"\
: 1.1194086074829102,\n \"model_score\": -0.10244649648666382,\n \"\
doc_id\": \"Edmund Husserl\",\n \"nmt_output\": \"1934 besuchte Jos\\u00e9\
\ Ort@@ ega y G@@ asset Hus@@ ser@@ l in Freiburg .\",\n \"word_probas\"\
: [\n -0.4458000063896179,\n -0.2745000123977661,\n -0.07199999690055847,\n\
\ -0.002300000051036477,\n -0.005900000222027302,\n -0.14579999446868896,\n\
\ -0.07500000298023224,\n -0.012400000356137753,\n -0.026900000870227814,\n\
\ -0.036400001496076584,\n -0.05299999937415123,\n -0.14990000426769257,\n\
\ -0.012400000356137753,\n -0.1145000010728836,\n -0.10999999940395355\n\
\ ]\n },\n \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n\
\ \"row\": {\n \"segid\": 1,\n \"translation\": {\n \"de\"\
: \"Eine entt\\u00e4uschende Neunte in China bedeutete jedoch, dass er in der\
\ Gesamtwertung auf den sechsten Platz zur\\u00fcckfiel.\",\n \"en\": \"\
However, a disappointing ninth in China meant that he dropped back to sixth in\
\ the standings.\"\n },\n \"scores\": [\n 61.0,\n 70.0,\n\
\ 87.0\n ],\n \"mean\": 72.66666412353516,\n \"z_scores\"\
: [\n -1.1951539516448975,\n 0.04217636212706566,\n -0.3127948045730591\n\
\ ],\n \"z_mean\": -0.488590806722641,\n \"model_score\": -0.43104609847068787,\n\
\ \"doc_id\": \"Felipe Massa\",\n \"nmt_output\": \"Eine entt\\u00e4u@@\
\ sch@@ ende Ne@@ un@@ te in China bedeutete jedoch , dass er in der Gesamt@@\
\ wertung auf den sechsten Platz zur\\u00fcck@@ fiel .\",\n \"word_probas\"\
: [\n -1.8716000318527222,\n -0.2978000044822693,\n -0.10170000046491623,\n\
\ -0.11259999871253967,\n -0.7271000146865845,\n -0.296999990940094,\n\
\ -0.05900000035762787,\n -0.2547000050544739,\n -0.07360000163316727,\n\
\ -0.7203999757766724,\n -0.39480000734329224,\n -0.1582999974489212,\n\
\ -0.17990000545978546,\n -0.14339999854564667,\n -1.4277000427246094,\n\
\ -0.10980000346899033,\n -1.2279000282287598,\n -0.07020000368356705,\n\
\ -0.7642999887466431,\n -0.791100025177002,\n -0.39969998598098755,\n\
\ -0.1379999965429306,\n -0.22869999706745148,\n -0.43529999256134033,\n\
\ -0.11100000143051147,\n -0.11140000075101852\n ]\n },\n\
\ \"truncated_cells\": []\n }\n]"
- "DATASET_NAME: \"Paul/hatecheck-dutch\"\nFEATURES: {'mhc_case_id': {'dtype': 'string',\
\ '_type': 'Value'}, 'functionality': {'dtype': 'string', '_type': 'Value'}, 'test_case':\
\ {'dtype': 'string', '_type': 'Value'}, 'label_gold': {'dtype': 'string', '_type':\
\ 'Value'}, 'target_ident': {'dtype': 'string', '_type': 'Value'}, 'ref_case_id':\
\ {'dtype': 'float64', '_type': 'Value'}, 'ref_templ_id': {'dtype': 'float64',\
\ '_type': 'Value'}, 'templ_id': {'dtype': 'int64', '_type': 'Value'}, 'case_templ':\
\ {'dtype': 'string', '_type': 'Value'}, 'label_annotated': {'dtype': 'string',\
\ '_type': 'Value'}, 'label_annotated_maj': {'dtype': 'string', '_type': 'Value'},\
\ 'disagreement_in_case': {'dtype': 'bool', '_type': 'Value'}, 'disagreement_in_template':\
\ {'dtype': 'bool', '_type': 'Value'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\"\
: 0,\n \"row\": {\n \"mhc_case_id\": \"dutch-1\",\n \"functionality\"\
: \"derog_neg_emote_h\",\n \"test_case\": \"Ik haat vrouwen.\",\n \"\
label_gold\": \"hateful\",\n \"target_ident\": \"women\",\n \"ref_case_id\"\
: null,\n \"ref_templ_id\": null,\n \"templ_id\": 1,\n \"case_templ\"\
: \"Ik haat [IDENT_P].\",\n \"label_annotated\": \"['hateful', 'hateful',\
\ 'hateful']\",\n \"label_annotated_maj\": \"hateful\",\n \"disagreement_in_case\"\
: false,\n \"disagreement_in_template\": false\n },\n \"truncated_cells\"\
: []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \"mhc_case_id\": \"\
dutch-2\",\n \"functionality\": \"derog_neg_emote_h\",\n \"test_case\"\
: \"Ik haat transgenders.\",\n \"label_gold\": \"hateful\",\n \"target_ident\"\
: \"trans people\",\n \"ref_case_id\": null,\n \"ref_templ_id\": null,\n\
\ \"templ_id\": 1,\n \"case_templ\": \"Ik haat [IDENT_P].\",\n \
\ \"label_annotated\": \"['hateful', 'hateful', 'hateful']\",\n \"label_annotated_maj\"\
: \"hateful\",\n \"disagreement_in_case\": false,\n \"disagreement_in_template\"\
: false\n },\n \"truncated_cells\": []\n }\n]"
- "DATASET_NAME: \"somosnlp/somos_alpaca_validation_agreement\"\nFEATURES: {'id':\
\ {'dtype': 'string', '_type': 'Value'}, 'annotation': {'dtype': 'string', '_type':\
\ 'Value'}, 'count': {'dtype': 'int64', '_type': 'Value'}}\nDATA SAMPLE:\n[\n\
\ {\n \"row_idx\": 0,\n \"row\": {\n \"id\": \"0000460f-f529-441c-a243-12f65fa2ba5d\"\
,\n \"annotation\": \"ALL GOOD\",\n \"count\": 9\n },\n \"truncated_cells\"\
: []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \"id\": \"00010713-2cd1-444b-8ab8-119bb570c2dc\"\
,\n \"annotation\": \"ALL GOOD\",\n \"count\": 1\n },\n \"truncated_cells\"\
: []\n }\n]"
- source_sentence: indic language wikipedia dataset
sentences:
- "DATASET_NAME: \"soketlabs/bhasha-wiki-indic\"\nFEATURES: {'id': {'dtype': 'string',\
\ '_type': 'Value'}, 'url': {'dtype': 'string', '_type': 'Value'}, 'title': {'dtype':\
\ 'string', '_type': 'Value'}, 'text': {'dtype': 'string', '_type': 'Value'},\
\ 'sents': {'dtype': 'int32', '_type': 'Value'}, 'chars': {'dtype': 'int32', '_type':\
\ 'Value'}, 'words': {'dtype': 'int32', '_type': 'Value'}, 'tokens': {'dtype':\
\ 'int32', '_type': 'Value'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\": 0,\n \
\ \"row\": {\n \"id\": 63047548,\n \"url\": \"https://en.wikipedia.org/wiki/P.%20S.%20Nataraja%20Pillai\"\
,\n \"title\": \"P. S. Nataraja Pillai\",\n \"text\": \"\\u09aa\\u09bf.\
\ \\u098f\\u09b8. \\u09a8\\u099f\\u09b0\\u09be\\u099c \\u09aa\\u09bf\\u09b2\\\
u09cd\\u09b2\\u09be\\u0987 (\\u09ae\\u09be\\u09b0\\u09cd\\u099a 1891-1966) \\\
u098f\\u0995\\u099c\\u09a8 \\u09ad\\u09be\\u09b0\\u09a4\\u09c0\\u09af\\u09bc \\\
u09b0\\u09be\\u099c\\u09a8\\u09c0\\u09a4\\u09bf\\u09ac\\u09bf\\u09a6, \\u09b8\\\
u09cd\\u09ac\\u09be\\u09a7\\u09c0\\u09a8\\u09a4\\u09be \\u09b8\\u0982\\u0997\\\
u09cd\\u09b0\\u09be\\u09ae\\u09c0 \\u098f\\u09ac\\u0982 1962 \\u09b8\\u09be\\\
u09b2\\u09c7 \\u0985\\u09a8\\u09c1\\u09b7\\u09cd\\u09a0\\u09bf\\u09a4 \\u09a4\\\
u09c3\\u09a4\\u09c0\\u09af\\u09bc \\u09b2\\u09cb\\u0995\\u09b8\\u09ad\\u09be \\\
u09a8\\u09bf\\u09b0\\u09cd\\u09ac\\u09be\\u099a\\u09a8\\u09c7\\u09b0 \\u09b8\\\
u09ae\\u09af\\u09bc \\u09b8\\u0982\\u09b8\\u09a6 \\u09b8\\u09a6\\u09b8\\u09cd\\\
u09af \\u099b\\u09bf\\u09b2\\u09c7\\u09a8\\u0964 \\u09a4\\u09bf\\u09a8\\u09bf\
\ 1948 \\u09b8\\u09be\\u09b2\\u09c7 \\u09a6\\u09bf\\u09b2\\u09cd\\u09b2\\u09bf\\\
u09a4\\u09c7 \\u09a4\\u09cd\\u09b0\\u09bf\\u09ac\\u09be\\u0999\\u09cd\\u0995\\\
u09c1\\u09b0 \\u09b8\\u09b0\\u0995\\u09be\\u09b0\\u09c7\\u09b0 \\u0989\\u09aa\\\
u09a6\\u09c7\\u09b7\\u09cd\\u099f\\u09be \\u098f\\u09ac\\u0982 1954 \\u09a5\\\
u09c7\\u0995\\u09c7 1955 \\u09b8\\u09be\\u09b2 \\u09aa\\u09b0\\u09cd\\u09af\\\
u09a8\\u09cd\\u09a4 \\u09a4\\u09cd\\u09b0\\u09bf\\u09ac\\u09be\\u0999\\u09cd\\\
u0995\\u09c1\\u09b0-\\u0995\\u09cb\\u099a\\u09bf\\u09a8 \\u09b0\\u09be\\u099c\\\
u09cd\\u09af\\u09c7\\u09b0 \\u0985\\u09b0\\u09cd\\u09a5\\u09ae\\u09a8\\u09cd\\\
u09a4\\u09cd\\u09b0\\u09c0 \\u09b9\\u09bf\\u09b8\\u09c7\\u09ac\\u09c7 \\u09a6\\\
u09be\\u09af\\u09bc\\u09bf\\u09a4\\u09cd\\u09ac \\u09aa\\u09be\\u09b2\\u09a8 \\\
u0995\\u09b0\\u09c7\\u09a8\\u0964 \\u099c\\u09c0\\u09ac\\u09a8 \\u0993 \\u09aa\\\
u099f\\u09ad\\u09c2\\u09ae\\u09bf \\u09aa\\u09bf. \\u098f\\u09b8. \\u09a8\\u099f\\\
u09b0\\u09be\\u099c 1891 \\u09b8\\u09be\\u09b2\\u09c7\\u09b0 \\u09ae\\u09be\\\
u09b0\\u09cd\\u099a \\u09ae\\u09be\\u09b8\\u09c7 \\u0995\\u09c7\\u09b0\\u09be\\\
u09b2\\u09be\\u09b0 \\u0986\\u09b2\\u09be\\u09aa\\u09cd\\u09aa\\u09c1\\u099d\\\
u09be \\u099c\\u09c7\\u09b2\\u09be\\u09af\\u09bc \\u099c\\u09a8\\u09cd\\u09ae\\\
u0997\\u09cd\\u09b0\\u09b9\\u09a3 \\u0995\\u09b0\\u09c7\\u09a8\\u0964 \\u09a4\\\
u09bf\\u09a8\\u09bf \\u09ae\\u09c2\\u09b2\\u09a4 1962 \\u09b8\\u09be\\u09b2\\\
u09c7 \\u09b8\\u09cd\\u09ac\\u09be\\u09a7\\u09c0\\u09a8\\u09ad\\u09be\\u09ac\\\
u09c7 \\u09b8\\u0982\\u09b8\\u09a6 \\u09b8\\u09a6\\u09b8\\u09cd\\u09af \\u09a8\\\
u09bf\\u09b0\\u09cd\\u09ac\\u09be\\u099a\\u09bf\\u09a4 \\u09b9\\u09a8; \\u09aa\\\
u09b0\\u09c7, \\u09a4\\u09bf\\u09a8\\u09bf \\u0995\\u0982\\u0997\\u09cd\\u09b0\\\
u09c7\\u09b8, \\u09ad\\u09be\\u09b0\\u09a4\\u09c0\\u09af\\u09bc \\u09b8\\u09ae\\\
u09be\\u099c\\u09a4\\u09be\\u09a8\\u09cd\\u09a4\\u09cd\\u09b0\\u09bf\\u0995 \\\
u09a6\\u09b2 \\u098f\\u09ac\\u0982 \\u09aa\\u09cd\\u09b0\\u099c\\u09be \\u09b8\\\
u09ae\\u09be\\u099c\\u09a4\\u09be\\u09a8\\u09cd\\u09a4\\u09cd\\u09b0\\u09bf\\\
u0995 \\u09a6\\u09b2\\u09c7\\u09b0 \\u09b8\\u09be\\u09a5\\u09c7 \\u09a4\\u09be\\\
u0981\\u09b0 \\u09b0\\u09be\\u099c\\u09a8\\u09c8\\u09a4\\u09bf\\u0995 \\u09b8\\\
u09ae\\u09cd\\u09aa\\u09b0\\u09cd\\u0995 \\u09b8\\u09cd\\u09a5\\u09be\\u09aa\\\
u09a8 \\u0995\\u09b0\\u09c7\\u09a8\\u0964 \\u09ad\\u09be\\u09b0\\u09a4\\u09c7\\\
u09b0 \\u09b8\\u09cd\\u09ac\\u09be\\u09a7\\u09c0\\u09a8\\u09a4\\u09be\\u09b0 \\\
u0986\\u0997\\u09c7, \\u09a4\\u09bf\\u09a8\\u09bf 1946 \\u09b8\\u09be\\u09b2\\\
u09c7 \\u099f\\u09cd\\u09b0\\u09be\\u09ad\\u09be\\u0999\\u09cd\\u0995\\u09cb\\\
u09b0 \\u09b8\\u0982\\u09ac\\u09bf\\u09a7\\u09be\\u09a8 \\u09b8\\u0982\\u09b8\\\
u09cd\\u0995\\u09be\\u09b0 \\u0995\\u09ae\\u09bf\\u099f\\u09bf\\u09b0 \\u09b8\\\
u09a6\\u09b8\\u09cd\\u09af \\u099b\\u09bf\\u09b2\\u09c7\\u09a8\\u0964 \\u0995\\\
u09b0\\u09cd\\u09ae\\u099c\\u09c0\\u09ac\\u09a8\\u09c7 \\u09a4\\u09bf\\u09a8\\\
u09bf 1952 \\u09a5\\u09c7\\u0995\\u09c7 1953 \\u09b8\\u09be\\u09b2 \\u09aa\\u09b0\\\
u09cd\\u09af\\u09a8\\u09cd\\u09a4 \\u099f\\u09cd\\u09b0\\u09be\\u09ad\\u09be\\\
u0999\\u09cd\\u0995\\u09cb\\u09b0 \\u09ae\\u09a3\\u09bf\\u0997\\u09be\\u09ae \\\
u0995\\u09b0\\u09ae \\u0995\\u09ae\\u09bf\\u099f\\u09bf\\u09b0, 1956 \\u09b8\\\
u09be\\u09b2\\u09c7 \\u09ac\\u09c7\\u0995\\u09be\\u09b0\\u09a4\\u09cd\\u09ac \\\
u0995\\u09ae\\u09bf\\u099f\\u09bf\\u09b0 \\u098f\\u09ac\\u0982 1958 \\u09a5\\\
u09c7\\u0995\\u09c7 1959 \\u09b8\\u09be\\u09b2 \\u09aa\\u09b0\\u09cd\\u09af\\\
u09a8\\u09cd\\u09a4 \\u09b8\\u0982\\u09b8\\u09cd\\u0995\\u09be\\u09b0 \\u0995\\\
u09ae\\u09bf\\u099f\\u09bf\\u09b0 (\\u0995\\u09c7\\u09b0\\u09be\\u09b2\\u09be)\
\ \\u09b8\\u09a6\\u09b8\\u09cd\\u09af \\u099b\\u09bf\\u09b2\\u09c7\\u09a8\\u0964\
\ \\u09b8\\u09b0\\u0995\\u09be\\u09b0-\\u09a8\\u09c7\\u09a4\\u09c3\\u09a4\\u09cd\\\
u09ac\\u09be\\u09a7\\u09c0\\u09a8 \\u0995\\u09ae\\u09bf\\u099f\\u09bf\\u09b0 \\\
u09b8\\u09a6\\u09b8\\u09cd\\u09af \\u09a5\\u09be\\u0995\\u09be\\u09b0 \\u09aa\\\
u09be\\u09b6\\u09be\\u09aa\\u09be\\u09b6\\u09bf, 1960 \\u09b8\\u09be\\u09b2\\\
u09c7 \\u09a8\\u099f\\u09b0\\u09be\\u099c\\u0995\\u09c7 \\u098f\\u0995 \\u09ac\\\
u099b\\u09b0\\u09c7\\u09b0 \\u099c\\u09a8\\u09cd\\u09af \\u0989\\u099a\\u09cd\\\
u099a \\u09aa\\u09b0\\u09cd\\u09af\\u09be\\u09af\\u09bc\\u09c7\\u09b0 \\u09ac\\\
u09bf\\u0995\\u09cd\\u09b0\\u09af\\u09bc \\u0995\\u09b0 \\u09a4\\u09a6\\u09a8\\\
u09cd\\u09a4 \\u0995\\u09ae\\u09bf\\u099f\\u09bf\\u09b0 \\u09b8\\u09ad\\u09be\\\
u09aa\\u09a4\\u09bf \\u09a8\\u09bf\\u09af\\u09c1\\u0995\\u09cd\\u09a4 \\u0995\\\
u09b0\\u09be \\u09b9\\u09af\\u09bc\\u0964 1948 \\u09b8\\u09be\\u09b2\\u09c7, \\\
u09b8\\u09cd\\u09ac\\u09be\\u09a7\\u09c0\\u09a8\\u09a4\\u09be\\u09b0 \\u098f\\\
u0995 \\u09ac\\u099b\\u09b0 \\u09aa\\u09b0, \\u09a8\\u099f\\u09b0\\u09be\\u099c\
\ 1950 \\u09b8\\u09be\\u09b2 \\u09aa\\u09b0\\u09cd\\u09af\\u09a8\\u09cd\\u09a4\
\ \\u0997\\u09a3\\u09aa\\u09b0\\u09bf\\u09b7\\u09a6\\u09c7\\u09b0 \\u09b8\\u09a6\\\
u09b8\\u09cd\\u09af \\u09a8\\u09bf\\u09b0\\u09cd\\u09ac\\u09be\\u099a\\u09bf\\\
u09a4 \\u09b9\\u09a8; \\u09aa\\u09b0\\u09c7, \\u09a4\\u09bf\\u09a8\\u09bf 1951\
\ \\u09a5\\u09c7\\u0995\\u09c7 1957 \\u09b8\\u09be\\u09b2 \\u09aa\\u09b0\\u09cd\\\
u09af\\u09a8\\u09cd\\u09a4 \\u099f\\u09cd\\u09b0\\u09be\\u09ad\\u09be\\u0999\\\
u09cd\\u0995\\u09cb\\u09b0-\\u0995\\u09cb\\u099a\\u09bf\\u09a8\\u09c7\\u09b0 \\\
u09ac\\u09bf\\u09a7\\u09be\\u09a8\\u09b8\\u09ad\\u09be\\u09af\\u09bc \\u09a6\\\
u09be\\u09af\\u09bc\\u09bf\\u09a4\\u09cd\\u09ac \\u09aa\\u09be\\u09b2\\u09a8 \\\
u0995\\u09b0\\u09c7\\u09a8\\u0964 1962 \\u09b8\\u09be\\u09b2\\u09c7, \\u09a8\\\
u099f\\u09b0\\u09be\\u099c \\u09a4\\u09bf\\u09b0\\u09c1\\u09ac\\u09a8\\u09a8\\\
u09cd\\u09a4\\u09aa\\u09c1\\u09b0\\u09ae \\u09a8\\u09bf\\u09b0\\u09cd\\u09ac\\\
u09be\\u099a\\u09a8\\u09c0 \\u098f\\u09b2\\u09be\\u0995\\u09be (\\u09af\\u09be\
\ \\\"\\u09a4\\u09cd\\u09b0\\u09bf\\u09ac\\u09be\\u09a8\\u09cd\\u09a6\\u09cd\\\
u09b0\\u09ae\\\" \\u09a8\\u09be\\u09ae\\u09c7\\u0993 \\u09aa\\u09b0\\u09bf\\u099a\\\
u09bf\\u09a4) \\u09a5\\u09c7\\u0995\\u09c7 \\u09b8\\u0982\\u09b8\\u09a6\\u09c0\\\
u09af\\u09bc \\u09a8\\u09bf\\u09b0\\u09cd\\u09ac\\u09be\\u099a\\u09a8\\u09c7 \\\
u09aa\\u09cd\\u09b0\\u09a4\\u09bf\\u09a6\\u09cd\\u09ac\\u09a8\\u09cd\\u09a6\\\
u09cd\\u09ac\\u09bf\\u09a4\\u09be \\u0995\\u09b0\\u09c7\\u09a8 \\u098f\\u09ac\\\
u0982 \\u09b8\\u0982\\u09b8\\u09a6 \\u09b8\\u09a6\\u09b8\\u09cd\\u09af \\u09b9\\\
u09bf\\u09b8\\u09be\\u09ac\\u09c7 \\u09a8\\u09bf\\u09b0\\u09cd\\u09ac\\u09be\\\
u099a\\u09bf\\u09a4 \\u09b9\\u09a8\\u0964 \\u09ac\\u09cd\\u09af\\u0995\\u09cd\\\
u09a4\\u09bf\\u0997\\u09a4 \\u099c\\u09c0\\u09ac\\u09a8\\u09c7 \\u09a8\\u099f\\\
u09b0\\u09be\\u099c \\u0995\\u09cb\\u09ae\\u09b2\\u09ae\\u09c7\\u09b0 \\u09b8\\\
u09be\\u09a5\\u09c7 \\u09ac\\u09bf\\u09ac\\u09be\\u09b9 \\u09ac\\u09a8\\u09cd\\\
u09a7\\u09a8\\u09c7 \\u0986\\u09ac\\u09a6\\u09cd\\u09a7 \\u09b9\\u09af\\u09bc\\\
u09c7\\u099b\\u09bf\\u09b2\\u09c7\\u09a8 \\u098f\\u09ac\\u0982 \\u09a4\\u09be\\\
u0981\\u09b0 \\u09b8\\u09be\\u09a4 \\u0995\\u09a8\\u09cd\\u09af\\u09be \\u0993\
\ \\u09a4\\u09bf\\u09a8 \\u09aa\\u09c1\\u09a4\\u09cd\\u09b0 \\u099b\\u09bf\\u09b2\\\
u0964 \\u09a8\\u099f\\u09b0\\u09be\\u099c \\u099b\\u09bf\\u09b2\\u09c7\\u09a8\
\ \\u098f\\u0995\\u099c\\u09a8 \\u09ad\\u09be\\u09b0\\u09a4\\u09c0\\u09af\\u09bc\
\ \\u09aa\\u09a3\\u09cd\\u09a1\\u09bf\\u09a4 \\u09ae\\u09a8\\u09cb\\u09ae\\u09a8\\\
u09bf\\u09af\\u09bc\\u09be\\u09ae \\u09aa\\u09bf \\u09b8\\u09c1\\u09a8\\u09cd\\\
u09a6\\u09b0\\u09ae \\u09aa\\u09bf\\u09b2\\u09cd\\u09b2\\u09be\\u0987\\u09af\\\
u09bc\\u09c7\\u09b0 \\u09aa\\u09c1\\u09a4\\u09cd\\u09b0, \\u09af\\u09bf\\u09a8\\\
u09bf \\u09aa\\u09c7\\u09b6\\u09be\\u0997\\u09a4\\u09ad\\u09be\\u09ac\\u09c7 \\\
u0985\\u09a7\\u09cd\\u09af\\u09be\\u09aa\\u0995 \\u09ae\\u09a8\\u09cb\\u09ae\\\
u09a8\\u09bf\\u09af\\u09bc\\u09be\\u09ae \\u09b8\\u09c1\\u09a8\\u09cd\\u09a6\\\
u09b0\\u09ae \\u09aa\\u09bf\\u09b2\\u09cd\\u09b2\\u09be\\u0987 \\u09a8\\u09be\\\
u09ae\\u09c7 \\u09aa\\u09b0\\u09bf\\u099a\\u09bf\\u09a4, \\u09af\\u09bf\\u09a8\\\
u09bf \\u0995\\u09c7\\u09b0\\u09be\\u09b2\\u09be\\u09af\\u09bc \\u0985\\u09ac\\\
u09b8\\u09cd\\u09a5\\u09bf\\u09a4 \\u098f\\u0995\\u099f\\u09bf \\u0995\\u09b2\\\
u09c7\\u099c\\u09c7\\u09b0 \\u09b6\\u09bf\\u0995\\u09cd\\u09b7\\u0995 \\u099b\\\
u09bf\\u09b2\\u09c7\\u09a8\\u0964 \\u09ac\\u09bf\\u09a4\\u09b0\\u09cd\\u0995 2017\
\ \\u09b8\\u09be\\u09b2\\u09c7, \\u0995\\u09c7\\u09b0\\u09be\\u09b2\\u09be\\u09b0\
\ \\u09ae\\u09c1\\u0996\\u09cd\\u09af\\u09ae\\u09a8\\u09cd\\u09a4\\u09cd\\u09b0\\\
u09c0 \\u09aa\\u09bf\\u09a8\\u09be\\u09b0\\u09be\\u0987 \\u09ac\\u09bf\\u099c\\\
u09af\\u09bc\\u09a8 \\u09aa\\u09bf. \\u098f\\u09b8. \\u09a8\\u099f\\u09b0\\u09be\\\
u099c \\u09aa\\u09bf\\u09b2\\u09cd\\u09b2\\u09be\\u0987 \\u09b8\\u09ae\\u09cd\\\
u09aa\\u09b0\\u09cd\\u0995\\u09c7 \\u0986\\u09aa\\u09a4\\u09cd\\u09a4\\u09bf\\\
u0995\\u09b0 \\u09ae\\u09a8\\u09cd\\u09a4\\u09ac\\u09cd\\u09af \\u0995\\u09b0\\\
u09be\\u09b0 \\u09a6\\u09be\\u09ac\\u09bf \\u0995\\u09b0\\u09c7\\u099b\\u09bf\\\
u09b2\\u09c7\\u09a8\\u0964 \\u09ae\\u09a8\\u09cd\\u09a4\\u09cd\\u09b0\\u09c0 \\\
u09aa\\u09b0\\u09c7 \\u098f\\u0987 \\u09ae\\u09a8\\u09cd\\u09a4\\u09ac\\u09cd\\\
u09af \\u09b8\\u09cd\\u09ac\\u09c0\\u0995\\u09be\\u09b0 \\u0995\\u09b0\\u09c7\
\ \\u0989\\u09b2\\u09cd\\u09b2\\u09c7\\u0996 \\u0995\\u09b0\\u09c7\\u09a8 \\u09af\\\
u09c7 \\u09a4\\u09bf\\u09a8\\u09bf \\u0995\\u0996\\u09a8\\u0987 \\u0995\\u09cb\\\
u09a8\\u0993 \\u09a8\\u09bf\\u09b0\\u09cd\\u09a6\\u09bf\\u09b7\\u09cd\\u099f \\\
u09ac\\u09cd\\u09af\\u0995\\u09cd\\u09a4\\u09bf\\u0995\\u09c7 \\u0985\\u09ac\\\
u099c\\u09cd\\u099e\\u09be \\u0995\\u09b0\\u09a4\\u09c7 \\u099a\\u09be\\u09a8\\\
u09a8\\u09bf, \\u0995\\u09c7\\u09ac\\u09b2 \\u0995\\u09af\\u09bc\\u09c7\\u0995\\\
u099f\\u09bf \\u09aa\\u09bf\\u09b2\\u09cd\\u09b2\\u09be\\u0987 \\u09aa\\u09b0\\\
u09bf\\u09ac\\u09be\\u09b0\\u0995\\u09c7 \\u0985\\u09ac\\u099c\\u09cd\\u099e\\\
u09be \\u0995\\u09b0\\u09a4\\u09c7 \\u099a\\u09c7\\u09af\\u09bc\\u09c7\\u099b\\\
u09bf\\u09b2\\u09c7\\u09a8\\u0964 \\u09ac\\u09bf\\u099c\\u09af\\u09bc\\u09a8 \\\
u098f\\u0987 \\u09ac\\u09b2\\u09c7 \\u09ac\\u09bf\\u09a4\\u09b0\\u09cd\\u0995\\\
u0997\\u09c1\\u09b2\\u09bf \\u09b8\\u09cd\\u09aa\\u09b7\\u09cd\\u099f \\u0995\\\
u09b0\\u09c7 \\u09a6\\u09bf\\u09af\\u09bc\\u09c7\\u099b\\u09bf\\u09b2\\u09c7\\\
u09a8 \\u09af\\u09c7 \\u09a4\\u09bf\\u09a8\\u09bf \\u09a8\\u099f\\u09b0\\u09be\\\
u099c\\u09c7\\u09b0 \\u09a8\\u09be\\u09ae\\u0993 \\u09ae\\u09a8\\u09c7 \\u09b0\\\
u09be\\u0996\\u09c7\\u09a8\\u09a8\\u09bf\\u0964 \\u09ac\\u09bf\\u09b0\\u09cb\\\
u09a7\\u09c0 \\u09a6\\u09b2 \\u09ad\\u09be\\u09b0\\u09a4\\u09c0\\u09af\\u09bc\
\ \\u099c\\u09be\\u09a4\\u09c0\\u09af\\u09bc \\u0995\\u0982\\u0997\\u09cd\\u09b0\\\
u09c7\\u09b8 \\u0995\\u09c7\\u09b0\\u09be\\u09b2\\u09be\\u09b0 \\u09ae\\u09c1\\\
u0996\\u09cd\\u09af\\u09ae\\u09a8\\u09cd\\u09a4\\u09cd\\u09b0\\u09c0\\u09b0 \\\
u09aa\\u09a6\\u09a4\\u09cd\\u09af\\u09be\\u0997\\u09c7\\u09b0 \\u09a6\\u09be\\\
u09ac\\u09bf \\u099c\\u09be\\u09a8\\u09bf\\u09af\\u09bc\\u09c7\\u099b\\u09bf\\\
u09b2, \\u09af\\u09be \\u09a4\\u09be\\u0981\\u0995\\u09c7 \\u09ac\\u09cd\\u09af\\\
u09be\\u09aa\\u0995 \\u09ac\\u09bf\\u09a4\\u09b0\\u09cd\\u0995\\u09c7 \\u09ab\\\
u09c7\\u09b2\\u09c7\\u099b\\u09bf\\u09b2\\u0964 \\u0964\",\n \"sents\": 16,\n\
\ \"chars\": 2057,\n \"words\": 300,\n \"tokens\": 835\n },\n\
\ \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n\
\ \"id\": 63048490,\n \"url\": \"https://en.wikipedia.org/wiki/Ganga%20Singh%20College\"\
,\n \"title\": \"Ganga Singh College\",\n \"text\": \"\\u0997\\u0999\\\
u09cd\\u0997\\u09be \\u09b8\\u09bf\\u0982 \\u0995\\u09b2\\u09c7\\u099c \\u09ac\\\
u09bf\\u09b9\\u09be\\u09b0\\u09c7\\u09b0 \\u099b\\u09aa\\u09b0\\u09be\\u09af\\\
u09bc \\u0985\\u09ac\\u09b8\\u09cd\\u09a5\\u09bf\\u09a4 \\u098f\\u0995\\u099f\\\
u09bf \\u09a1\\u09bf\\u0997\\u09cd\\u09b0\\u09bf \\u0995\\u09b2\\u09c7\\u099c\\\
u0964 \\u098f\\u099f\\u09bf \\u099c\\u09af\\u09bc \\u09aa\\u09cd\\u09b0\\u0995\\\
u09be\\u09b6 \\u09ac\\u09bf\\u09b6\\u09cd\\u09ac\\u09ac\\u09bf\\u09a6\\u09cd\\\
u09af\\u09be\\u09b2\\u09af\\u09bc\\u09c7\\u09b0 \\u098f\\u0995\\u099f\\u09bf \\\
u0985\\u0982\\u09b6\\u0964 \\u0995\\u09b2\\u09c7\\u099c\\u099f\\u09bf \\u0995\\\
u09b2\\u09be \\u0993 \\u09ac\\u09bf\\u099c\\u09cd\\u099e\\u09be\\u09a8\\u09c7\
\ \\u0987\\u09a8\\u09cd\\u099f\\u09be\\u09b0\\u09ae\\u09bf\\u09a1\\u09bf\\u09af\\\
u09bc\\u09c7\\u099f \\u098f\\u09ac\\u0982 \\u09a4\\u09bf\\u09a8 \\u09ac\\u099b\\\
u09b0\\u09c7\\u09b0 \\u09a1\\u09bf\\u0997\\u09cd\\u09b0\\u09bf \\u0995\\u09cb\\\
u09b0\\u09cd\\u09b8 (\\u099f\\u09bf\\u09a1\\u09bf\\u09b8\\u09bf) \\u09aa\\u09cd\\\
u09b0\\u09a6\\u09be\\u09a8 \\u0995\\u09b0\\u09c7\\u0964 1966 \\u09b8\\u09be\\\
u09b2\\u09c7 \\u0987\\u09a4\\u09bf\\u09b9\\u09be\\u09b8 \\u0995\\u09b2\\u09c7\\\
u099c \\u09aa\\u09cd\\u09b0\\u09a4\\u09bf\\u09b7\\u09cd\\u09a0\\u09bf\\u09a4 \\\
u09b9\\u09af\\u09bc\\u0964 \\u09ac\\u09bf\\u09ad\\u09be\\u0997 \\u0995\\u09b2\\\
u09be \\u09b9\\u09bf\\u09a8\\u09cd\\u09a6\\u09bf \\u0989\\u09b0\\u09cd\\u09a6\\\
u09c1 \\u09a6\\u09b0\\u09cd\\u09b6\\u09a8 \\u0985\\u09b0\\u09cd\\u09a5\\u09a8\\\
u09c0\\u09a4\\u09bf \\u0987\\u0982\\u09b0\\u09c7\\u099c\\u09bf \\u09b0\\u09be\\\
u09b7\\u09cd\\u099f\\u09cd\\u09b0\\u09ac\\u09bf\\u099c\\u09cd\\u099e\\u09be\\\
u09a8 \\u0987\\u09a4\\u09bf\\u09b9\\u09be\\u09b8 \\u09ae\\u09a8\\u09cb\\u09ac\\\
u09bf\\u099c\\u09cd\\u099e\\u09be\\u09a8 \\u09ac\\u09bf\\u099c\\u09cd\\u099e\\\
u09be\\u09a8 \\u0997\\u09a3\\u09bf\\u09a4 \\u09aa\\u09a6\\u09be\\u09b0\\u09cd\\\
u09a5\\u09ac\\u09bf\\u099c\\u09cd\\u099e\\u09be\\u09a8 \\u09b0\\u09b8\\u09be\\\
u09af\\u09bc\\u09a8 \\u09aa\\u09cd\\u09b0\\u09be\\u09a3\\u09bf\\u09ac\\u09bf\\\
u09a6\\u09cd\\u09af\\u09be\",\n \"sents\": 5,\n \"chars\": 345,\n \
\ \"words\": 49,\n \"tokens\": 143\n },\n \"truncated_cells\": []\n\
\ }\n]"
- "DATASET_NAME: \"community-datasets/wiki_snippets\"\nFEATURES: {'_id': {'dtype':\
\ 'string', '_type': 'Value'}, 'datasets_id': {'dtype': 'int32', '_type': 'Value'},\
\ 'wiki_id': {'dtype': 'string', '_type': 'Value'}, 'start_paragraph': {'dtype':\
\ 'int32', '_type': 'Value'}, 'start_character': {'dtype': 'int32', '_type': 'Value'},\
\ 'end_paragraph': {'dtype': 'int32', '_type': 'Value'}, 'end_character': {'dtype':\
\ 'int32', '_type': 'Value'}, 'article_title': {'dtype': 'string', '_type': 'Value'},\
\ 'section_title': {'dtype': 'string', '_type': 'Value'}, 'passage_text': {'dtype':\
\ 'string', '_type': 'Value'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\": 0,\n \
\ \"row\": {\n \"_id\": \"{\\\"datasets_id\\\": 0, \\\"wiki_id\\\": \\\"\
Q1294448\\\", \\\"sp\\\": 2, \\\"sc\\\": 0, \\\"ep\\\": 6, \\\"ec\\\": 610}\"\
,\n \"datasets_id\": 0,\n \"wiki_id\": \"Q1294448\",\n \"start_paragraph\"\
: 2,\n \"start_character\": 0,\n \"end_paragraph\": 6,\n \"end_character\"\
: 610,\n \"article_title\": \"\\u00c1gi Szal\\u00f3ki\",\n \"section_title\"\
: \"Life\",\n \"passage_text\": \"\\u00c1gi Szal\\u00f3ki Life She started\
\ singing as a toddler, considering M\\u00e1rta Sebesty\\u00e9n a role model.\
\ Her musical background is traditional folk music; she first won recognition\
\ for singing with \\u00d6kr\\u00f6s in a traditional folk style, and Besh o droM,\
\ a Balkan gypsy brass band. With these ensembles she toured around the world\
\ from the Montreal Jazz Festival, through Glastonbury Festival to the Th\\u00e9atre\
\ de la Ville in Paris, from New York to Beijing.\\nSince 2005, she began to pursue\
\ her solo career and explore various genres, such as jazz, thirties ballads,\
\ or children's songs.\\nUntil now, three of her six released albums\"\n },\n\
\ \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n\
\ \"_id\": \"{\\\"datasets_id\\\": 0, \\\"wiki_id\\\": \\\"Q1294448\\\",\
\ \\\"sp\\\": 6, \\\"sc\\\": 610, \\\"ep\\\": 6, \\\"ec\\\": 1198}\",\n \"\
datasets_id\": 0,\n \"wiki_id\": \"Q1294448\",\n \"start_paragraph\"\
: 6,\n \"start_character\": 610,\n \"end_paragraph\": 6,\n \"end_character\"\
: 1198,\n \"article_title\": \"\\u00c1gi Szal\\u00f3ki\",\n \"section_title\"\
: \"Life\",\n \"passage_text\": \"were awarded Album of the Year Prize (MAHASZ\
\ Fonogram Award) in the children's records and jazz categories.\\n\\\"When I\
\ sing it's not important whether it's jazz or traditional music or music of any\
\ other kind\\\", Szal\\u00f3ki has said. \\\"I feel as if I'm swimming in the\
\ music. The music is like water; or air and I fly in it.\\\"\\nShe is trained\
\ in the Kod\\u00e1ly method. Many of her projects are inspired by poetry, be\
\ it \\\"Sunset of Rust, It is Dusk\\\" on Lament (2005), or the Kar\\u00e1dy\
\ album of ballads (A V\\u00e1gy Muzsik\\u00e1l, 2008).\\nShe has also performed\
\ at numerous children's concerts.\"\n },\n \"truncated_cells\": []\n }\n\
]"
- "DATASET_NAME: \"Shiveswarran/llm_instruction_code_V6.1\"\nFEATURES: {'Unnamed:\
\ 0': {'dtype': 'int64', '_type': 'Value'}, 'instruction': {'dtype': 'string',\
\ '_type': 'Value'}, 'code': {'dtype': 'string', '_type': 'Value'}}\nDATA SAMPLE:\n\
[\n {\n \"row_idx\": 0,\n \"row\": {\n \"Unnamed: 0\": 0,\n \"\
instruction\": \"Write me Python code is a program to read, parse, and display\
\ thermal data from the Topdon TC001 Thermal camera. It includes detailed information\
\ about the program, such as its author, date of creation, and purpose. The code\
\ also includes a list of key bindings for controlling the program, such as increasing/decreasing\
\ blur, changing the interpolated scale, and toggling the HUD. The code also includes\
\ a function to detect if the program is running on a Raspberry Pi, as OpenCV\
\ behaves differently on different builds. The code then initializes the video\
\ capture device and sets the appropriate properties, such as not converting the\
\ video to RGB to preserve the temperature data. The code then sets the initial\
\ values for various parameters, such as the window size, scale, and contrast.\
\ It also creates a named window for displaying the thermal data and sets up the\
\ font and color map for the display. The main loop of the program then begins,\
\ which includes code for reading the frames from the video capture device, parsing\
\ the thermal data, and displaying the data on the screen. The code also includes\
\ code for handling key presses, such as changing the blur radius, contrast, and\
\ color map, and for recording and taking snapshots of the thermal data.\",\n\
\ \"code\": \"#!/usr/bin/env python3\\\\n'''\\\\nLes Wright 21 June 2023\\\
\\nhttps://youtube.com/leslaboratory\\\\nA Python program to read, parse and display\
\ thermal data from the Topdon TC001 Thermal camera!\\\\n'''\\\\nprint('Les Wright\
\ 21 June 2023')\\\\nprint('https://youtube.com/leslaboratory')\\\\nprint('A Python\
\ program to read, parse and display thermal data from the Topdon TC001 Thermal\
\ camera!')\\\\nprint('')\\\\nprint('Tested on Debian all features are working\
\ correctly')\\\\nprint('This will work on the Pi However a number of workarounds\
\ are implemented!')\\\\nprint('Seemingly there are bugs in the compiled version\
\ of cv2 that ships with the Pi!')\\\\nprint('')\\\\nprint('Key Bindings:')\\\\\
nprint('')\\\\nprint('a z: Increase/Decrease Blur')\\\\nprint('s x: Floating High\
\ and Low Temp Label Threshold')\\\\nprint('d c: Change Interpolated scale Note:\
\ This will not change the window size on the Pi')\\\\nprint('f v: Contrast')\\\
\\nprint('q w: Fullscreen Windowed (note going back to windowed does not seem\
\ to work on the Pi!)')\\\\nprint('r t: Record and Stop')\\\\nprint('p : Snapshot')\\\
\\nprint('m : Cycle through ColorMaps')\\\\nprint('h : Toggle HUD')\\\\n\\\\nimport\
\ cv2\\\\nimport numpy as np\\\\nimport argparse\\\\nimport time\\\\nimport io\\\
\\n\\\\n#We need to know if we are running on the Pi, because openCV behaves a\
\ little oddly on all the builds!\\\\n#https://raspberrypi.stackexchange.com/questions/5100/detect-that-a-python-program-is-running-on-the-pi\\\
\\ndef is_raspberrypi():\\\\n try:\\\\n with io.open('/sys/firmware/devicetree/base/model',\
\ 'r') as m:\\\\n if 'raspberry pi' in m.read().lower(): return True\\\
\\n except Exception: pass\\\\n return False\\\\n\\\\nisPi = is_raspberrypi()\\\
\\n\\\\nparser = argparse.ArgumentParser()\\\\nparser.add_argument(\\\\\\\"--device\\\
\\\\\", type=int, default=0, help=\\\\\\\"Video Device number e.g. 0, use v4l2-ctl\
\ --list-devices\\\\\\\")\\\\nargs = parser.parse_args()\\\\n\\\\t\\\\nif args.device:\\\
\\n\\\\tdev = args.device\\\\nelse:\\\\n\\\\tdev = 0\\\\n\\\\t\\\\n#init video\\\
\\ncap = cv2.VideoCapture('/dev/video'+str(dev), cv2.CAP_V4L)\\\\n#cap = cv2.VideoCapture(0)\\\
\\n#pull in the video but do NOT automatically convert to RGB, else it breaks\
\ the temperature data!\\\\n#https://stackoverflow.com/questions/63108721/opencv-setting-videocap-property-to-cap-prop-convert-rgb-generates-weird-boolean\\\
\\nif isPi == True:\\\\n\\\\tcap.set(cv2.CAP_PROP_CONVERT_RGB, 0.0)\\\\nelse:\\\
\\n\\\\tcap.set(cv2.CAP_PROP_CONVERT_RGB, False)\\\\n\\\\n#256x192 General settings\\\
\\nwidth = 256 #Sensor width\\\\nheight = 192 #sensor height\\\\nscale = 3 #scale\
\ multiplier\\\\nnewWidth = width*scale \\\\nnewHeight = height*scale\\\\nalpha\
\ = 1.0 # Contrast control (1.0-3.0)\\\\ncolormap = 0\\\\nfont=cv2.FONT_HERSHEY_SIMPLEX\\\
\\ndispFullscreen = False\\\\ncv2.namedWindow('Thermal',cv2.WINDOW_GUI_NORMAL)\\\
\\ncv2.resizeWindow('Thermal', newWidth,newHeight)\\\\nrad = 0 #blur radius\\\\\
nthreshold = 2\\\\nhud = True\\\\nrecording = False\\\\nelapsed = \\\\\\\"00:00:00\\\
\\\\\"\\\\nsnaptime = \\\\\\\"None\\\\\\\"\\\\n\\\\n\"\n },\n \"truncated_cells\"\
: []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \"Unnamed: 0\": 1,\n\
\ \"instruction\": \"Write me Python code defines a function called `rec()`\
\ that creates a video writer object. The video writer object is used to write\
\ video frames to a file. The function takes no arguments and returns a video\
\ writer object. The first line of the function gets the current time in the format\
\ `YYYYMMDD--HHMMSS`. This time is used to create a unique file name for the output\
\ video file. The second line of the function creates a video writer object. The\
\ video writer object is created using the `cv2.VideoWriter()` function. The `cv2.VideoWriter()`\
\ function takes four arguments: * The first argument is the name of the output\
\ video file. * The second argument is the fourcc code of the video codec to be\
\ used. The fourcc code is a four-character code that identifies the video codec.\
\ In this case, the `XVID` codec is used. * The third argument is the frame rate\
\ of the output video. The frame rate is the number of frames per second that\
\ the video will be played back at. In this case, the frame rate is set to 25\
\ frames per second. * The fourth argument is the size of the output video frames.\
\ The size of the output video frames is specified as a tuple of two integers.\
\ The first integer is the width of the output video frames, and the second integer\
\ is the height of the output video frames. In this case, the output video frames\
\ will be 640 pixels wide and 480 pixels high. The third line of the function\
\ returns the video writer object.\",\n \"code\": \"def rec():\\\\n\\\\tnow\
\ = time.strftime(\\\\\\\"%Y%m%d--%H%M%S\\\\\\\")\\\\n\\\\t#do NOT use mp4 here,\
\ it is flakey!\\\\n\\\\tvideoOut = cv2.VideoWriter(now+'output.avi', cv2.VideoWriter_fourcc(*'XVID'),25,\
\ (newWidth,newHeight))\\\\n\\\\treturn(videoOut)\\\\n\\\\n\"\n },\n \"\
truncated_cells\": []\n }\n]"
- source_sentence: optimum library dependents dataset
sentences:
- "DATASET_NAME: \"open-source-metrics/optimum-dependents\"\nFEATURES: {'name':\
\ {'dtype': 'string', '_type': 'Value'}, 'stars': {'dtype': 'int64', '_type':\
\ 'Value'}, 'forks': {'dtype': 'int64', '_type': 'Value'}}\nDATA SAMPLE:\n[\n\
\ {\n \"row_idx\": 0,\n \"row\": {\n \"name\": \"deepchecks/deepchecks\"\
,\n \"stars\": 2550,\n \"forks\": 183\n },\n \"truncated_cells\"\
: []\n },\n {\n \"row_idx\": 1,\n \"row\": {\n \"name\": \"huggingface/optimum-neuron\"\
,\n \"stars\": 33,\n \"forks\": 2\n },\n \"truncated_cells\":\
\ []\n }\n]"
- "DATASET_NAME: \"ngocuong/Ghepmat\"\nFEATURES: {'text': {'dtype': 'string', '_type':\
\ 'Value'}}\nDATA SAMPLE:\n[\n {\n \"row_idx\": 0,\n \"row\": {\n \
\ \"text\": \"The model file required is \\\"inswapper_128.onnx\\\".Mirrors are\
\ given the roop project [installation guide](https://github.com/s0md3v/roop/wiki/1.-Installation).\"\
\n },\n \"truncated_cells\": []\n },\n {\n \"row_idx\": 1,\n \"\
row\": {\n \"text\": \"insightface==0.7.3\"\n },\n \"truncated_cells\"\
: []\n }\n]"
- "DATASET_NAME: \"wangrongsheng/GenMedGPT-5k-en\"\nFEATURES: {'input': {'dtype':\
\ 'string', '_type': 'Value'}, 'instruction': {'dtype': 'string', '_type': 'Value'},\
\ 'output': {'dtype': 'string', '_type': 'Value'}}\nDATA SAMPLE:\n[\n {\n \
\ \"row_idx\": 0,\n \"row\": {\n \"input\": \"Doctor, I have been experiencing\
\ sudden and frequent panic attacks. I don't know what to do.\",\n \"instruction\"\
: \"If you are a doctor, please answer the medical questions based on the patient's\
\ description.\",\n \"output\": \"Well, based on what you're telling me,\
\ it sounds like you may be suffering from panic disorder. The best course of\
\ action is to start with psychotherapy and mental health counseling. Additionally,\
\ we should conduct an electrocardiogram to make sure that there are no physical\
\ issues causing your panic attacks. We will also need to perform a depression\
\ screen and a toxicology screen to rule out any other underlying causes. Finally,\
\ I would recommend a comprehensive psychological and psychiatric evaluation and\
\ therapy to help manage your symptoms.\"\n },\n \"truncated_cells\": []\n\
\ },\n {\n \"row_idx\": 1,\n \"row\": {\n \"input\": \"Doctor, I\
\ have been experiencing a hoarse voice for a few weeks now and it's not getting\
\ any better despite taking medication. What could be the problem?\",\n \"\
instruction\": \"If you are a doctor, please answer the medical questions based\
\ on the patient's description.\",\n \"output\": \"It's possible that you\
\ have a vocal cord polyp. To confirm this, we need to conduct tracheoscopy and\
\ laryngoscopy with a biopsy. We may also need to conduct an occupational therapy\
\ assessment such as speech therapy, other diagnostic procedures like an interview\
\ or consultation, physical therapy exercises, as well as diagnostic procedures\
\ on your nose, mouth, and pharynx to rule out any other underlying conditions.\
\ Additionally, we may need to evaluate your eyes and ears for any related conditions,\
\ such as ophthalmologic and otologic diagnosis and treatment.\"\n },\n \
\ \"truncated_cells\": []\n }\n]"
---
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased). It maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 179 tokens
- **Output Dimensionality:** 128 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel
(1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
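The module names above come straight from the saved configuration: module `(0)` wraps the underlying `BertModel`, and module `(1)` is the linear projection from 768 down to 128 dimensions. As a minimal sketch (assuming the checkpoint loads as a regular Sentence Transformers model, and reusing the `contrastive-bert-base-uncased` placeholder id from the usage example below), the same numbers can be read back from the loaded object:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("contrastive-bert-base-uncased")

# Module (0) is the Transformer wrapper, module (1) the 768 -> 128 projection.
print(model)                                      # prints the architecture shown above
print(model.max_seq_length)                       # 179
print(model.get_sentence_embedding_dimension())   # 128
```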
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("contrastive-bert-base-uncased")
# Run inference
sentences = [
'optimum library dependents dataset',
'DATASET_NAME: "open-source-metrics/optimum-dependents"\nFEATURES: {\'name\': {\'dtype\': \'string\', \'_type\': \'Value\'}, \'stars\': {\'dtype\': \'int64\', \'_type\': \'Value\'}, \'forks\': {\'dtype\': \'int64\', \'_type\': \'Value\'}}\nDATA SAMPLE:\n[\n {\n "row_idx": 0,\n "row": {\n "name": "deepchecks/deepchecks",\n "stars": 2550,\n "forks": 183\n },\n "truncated_cells": []\n },\n {\n "row_idx": 1,\n "row": {\n "name": "huggingface/optimum-neuron",\n "stars": 33,\n "forks": 2\n },\n "truncated_cells": []\n }\n]',
'DATASET_NAME: "ngocuong/Ghepmat"\nFEATURES: {\'text\': {\'dtype\': \'string\', \'_type\': \'Value\'}}\nDATA SAMPLE:\n[\n {\n "row_idx": 0,\n "row": {\n "text": "The model file required is \\"inswapper_128.onnx\\".Mirrors are given the roop project [installation guide](https://github.com/s0md3v/roop/wiki/1.-Installation)."\n },\n "truncated_cells": []\n },\n {\n "row_idx": 1,\n "row": {\n "text": "insightface==0.7.3"\n },\n "truncated_cells": []\n }\n]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
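Since the similarity function listed above is cosine similarity, the scores returned by `model.similarity` are cosine similarities between the encoded query and the two dataset descriptions.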
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 3e-06
- `num_train_epochs`: 10
- `fp16`: True
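The non-default values above can also be expressed directly as training arguments. Below is a minimal, hedged sketch assuming the standard `SentenceTransformerTrainingArguments` API from Sentence Transformers 3.x; the output directory is a placeholder, and the training dataset and loss are not documented in this card.
```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Minimal sketch of the non-default hyperparameters listed above.
# The output_dir is a placeholder; the dataset and loss used for training
# are not documented in this card and must be supplied separately.
args = SentenceTransformerTrainingArguments(
    output_dir="contrastive-bert-base-uncased",  # placeholder path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-6,
    num_train_epochs=10,
    fp16=True,
)
```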
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"TRANSLATION"
] | [
"MEDAL"
] | Non_BioNLP |
khoa-klaytn/bge-small-en-v1.5-angle | khoa-klaytn | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,704 | 1,704 | 29 | 4 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-small-en-v1.5-angle
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 37.21923821573361
- type: f1
value: 68.0914945617093
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.75377499999999
- type: ap
value: 89.46766124546022
- type: f1
value: 92.73884001331487
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.986
- type: f1
value: 46.55936786727896
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.846000000000004
- type: map_at_10
value: 51.388
- type: map_at_100
value: 52.132999999999996
- type: map_at_1000
value: 52.141000000000005
- type: map_at_3
value: 47.037
- type: map_at_5
value: 49.579
- type: mrr_at_1
value: 36.558
- type: mrr_at_10
value: 51.658
- type: mrr_at_100
value: 52.402
- type: mrr_at_1000
value: 52.410000000000004
- type: mrr_at_3
value: 47.345
- type: mrr_at_5
value: 49.797999999999995
- type: ndcg_at_1
value: 35.846000000000004
- type: ndcg_at_10
value: 59.550000000000004
- type: ndcg_at_100
value: 62.596
- type: ndcg_at_1000
value: 62.759
- type: ndcg_at_3
value: 50.666999999999994
- type: ndcg_at_5
value: 55.228
- type: precision_at_1
value: 35.846000000000004
- type: precision_at_10
value: 8.542
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.389
- type: precision_at_5
value: 14.438
- type: recall_at_1
value: 35.846000000000004
- type: recall_at_10
value: 85.42
- type: recall_at_100
value: 98.43499999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 61.166
- type: recall_at_5
value: 72.191
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.402770198163594
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.01545436974177
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.586465273207196
- type: mrr
value: 74.42169019038825
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.1891186537969
- type: cos_sim_spearman
value: 83.75492046087288
- type: euclidean_pearson
value: 84.11766204805357
- type: euclidean_spearman
value: 84.01456493126516
- type: manhattan_pearson
value: 84.2132950502772
- type: manhattan_spearman
value: 83.89227298813377
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.74025974025975
- type: f1
value: 85.71493566466381
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.467181385006434
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.719496037339056
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.587000000000003
- type: map_at_10
value: 41.114
- type: map_at_100
value: 42.532
- type: map_at_1000
value: 42.661
- type: map_at_3
value: 37.483
- type: map_at_5
value: 39.652
- type: mrr_at_1
value: 36.338
- type: mrr_at_10
value: 46.763
- type: mrr_at_100
value: 47.393
- type: mrr_at_1000
value: 47.445
- type: mrr_at_3
value: 43.538
- type: mrr_at_5
value: 45.556000000000004
- type: ndcg_at_1
value: 36.338
- type: ndcg_at_10
value: 47.658
- type: ndcg_at_100
value: 52.824000000000005
- type: ndcg_at_1000
value: 54.913999999999994
- type: ndcg_at_3
value: 41.989
- type: ndcg_at_5
value: 44.944
- type: precision_at_1
value: 36.338
- type: precision_at_10
value: 9.156
- type: precision_at_100
value: 1.4789999999999999
- type: precision_at_1000
value: 0.196
- type: precision_at_3
value: 20.076
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 29.587000000000003
- type: recall_at_10
value: 60.746
- type: recall_at_100
value: 82.157
- type: recall_at_1000
value: 95.645
- type: recall_at_3
value: 44.821
- type: recall_at_5
value: 52.819
- type: map_at_1
value: 30.239
- type: map_at_10
value: 39.989000000000004
- type: map_at_100
value: 41.196
- type: map_at_1000
value: 41.325
- type: map_at_3
value: 37.261
- type: map_at_5
value: 38.833
- type: mrr_at_1
value: 37.516
- type: mrr_at_10
value: 46.177
- type: mrr_at_100
value: 46.806
- type: mrr_at_1000
value: 46.849000000000004
- type: mrr_at_3
value: 44.002
- type: mrr_at_5
value: 45.34
- type: ndcg_at_1
value: 37.516
- type: ndcg_at_10
value: 45.586
- type: ndcg_at_100
value: 49.897000000000006
- type: ndcg_at_1000
value: 51.955
- type: ndcg_at_3
value: 41.684
- type: ndcg_at_5
value: 43.617
- type: precision_at_1
value: 37.516
- type: precision_at_10
value: 8.522
- type: precision_at_100
value: 1.374
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 20.105999999999998
- type: precision_at_5
value: 14.152999999999999
- type: recall_at_1
value: 30.239
- type: recall_at_10
value: 55.03
- type: recall_at_100
value: 73.375
- type: recall_at_1000
value: 86.29599999999999
- type: recall_at_3
value: 43.269000000000005
- type: recall_at_5
value: 48.878
- type: map_at_1
value: 38.338
- type: map_at_10
value: 50.468999999999994
- type: map_at_100
value: 51.553000000000004
- type: map_at_1000
value: 51.608
- type: map_at_3
value: 47.107
- type: map_at_5
value: 49.101
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 54.057
- type: mrr_at_100
value: 54.764
- type: mrr_at_1000
value: 54.791000000000004
- type: mrr_at_3
value: 51.56699999999999
- type: mrr_at_5
value: 53.05
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 56.379000000000005
- type: ndcg_at_100
value: 60.645
- type: ndcg_at_1000
value: 61.73499999999999
- type: ndcg_at_3
value: 50.726000000000006
- type: ndcg_at_5
value: 53.58500000000001
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 9.141
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.654
- type: precision_at_5
value: 15.723999999999998
- type: recall_at_1
value: 38.338
- type: recall_at_10
value: 70.30499999999999
- type: recall_at_100
value: 88.77199999999999
- type: recall_at_1000
value: 96.49799999999999
- type: recall_at_3
value: 55.218
- type: recall_at_5
value: 62.104000000000006
- type: map_at_1
value: 25.682
- type: map_at_10
value: 33.498
- type: map_at_100
value: 34.461000000000006
- type: map_at_1000
value: 34.544000000000004
- type: map_at_3
value: 30.503999999999998
- type: map_at_5
value: 32.216
- type: mrr_at_1
value: 27.683999999999997
- type: mrr_at_10
value: 35.467999999999996
- type: mrr_at_100
value: 36.32
- type: mrr_at_1000
value: 36.386
- type: mrr_at_3
value: 32.618
- type: mrr_at_5
value: 34.262
- type: ndcg_at_1
value: 27.683999999999997
- type: ndcg_at_10
value: 38.378
- type: ndcg_at_100
value: 43.288
- type: ndcg_at_1000
value: 45.413
- type: ndcg_at_3
value: 32.586
- type: ndcg_at_5
value: 35.499
- type: precision_at_1
value: 27.683999999999997
- type: precision_at_10
value: 5.864
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 13.446
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 25.682
- type: recall_at_10
value: 51.712
- type: recall_at_100
value: 74.446
- type: recall_at_1000
value: 90.472
- type: recall_at_3
value: 36.236000000000004
- type: recall_at_5
value: 43.234
- type: map_at_1
value: 16.073999999999998
- type: map_at_10
value: 24.352999999999998
- type: map_at_100
value: 25.438
- type: map_at_1000
value: 25.545
- type: map_at_3
value: 21.614
- type: map_at_5
value: 23.104
- type: mrr_at_1
value: 19.776
- type: mrr_at_10
value: 28.837000000000003
- type: mrr_at_100
value: 29.755
- type: mrr_at_1000
value: 29.817
- type: mrr_at_3
value: 26.201999999999998
- type: mrr_at_5
value: 27.714
- type: ndcg_at_1
value: 19.776
- type: ndcg_at_10
value: 29.701
- type: ndcg_at_100
value: 35.307
- type: ndcg_at_1000
value: 37.942
- type: ndcg_at_3
value: 24.764
- type: ndcg_at_5
value: 27.025
- type: precision_at_1
value: 19.776
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 0.971
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 12.065
- type: precision_at_5
value: 8.905000000000001
- type: recall_at_1
value: 16.073999999999998
- type: recall_at_10
value: 41.647
- type: recall_at_100
value: 66.884
- type: recall_at_1000
value: 85.91499999999999
- type: recall_at_3
value: 27.916
- type: recall_at_5
value: 33.729
- type: map_at_1
value: 28.444999999999997
- type: map_at_10
value: 38.218999999999994
- type: map_at_100
value: 39.595
- type: map_at_1000
value: 39.709
- type: map_at_3
value: 35.586
- type: map_at_5
value: 36.895
- type: mrr_at_1
value: 34.841
- type: mrr_at_10
value: 44.106
- type: mrr_at_100
value: 44.98
- type: mrr_at_1000
value: 45.03
- type: mrr_at_3
value: 41.979
- type: mrr_at_5
value: 43.047999999999995
- type: ndcg_at_1
value: 34.841
- type: ndcg_at_10
value: 43.922
- type: ndcg_at_100
value: 49.504999999999995
- type: ndcg_at_1000
value: 51.675000000000004
- type: ndcg_at_3
value: 39.858
- type: ndcg_at_5
value: 41.408
- type: precision_at_1
value: 34.841
- type: precision_at_10
value: 7.872999999999999
- type: precision_at_100
value: 1.2449999999999999
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 18.993
- type: precision_at_5
value: 13.032
- type: recall_at_1
value: 28.444999999999997
- type: recall_at_10
value: 54.984
- type: recall_at_100
value: 78.342
- type: recall_at_1000
value: 92.77
- type: recall_at_3
value: 42.842999999999996
- type: recall_at_5
value: 47.247
- type: map_at_1
value: 23.072
- type: map_at_10
value: 32.354
- type: map_at_100
value: 33.800000000000004
- type: map_at_1000
value: 33.908
- type: map_at_3
value: 29.232000000000003
- type: map_at_5
value: 31.049
- type: mrr_at_1
value: 29.110000000000003
- type: mrr_at_10
value: 38.03
- type: mrr_at_100
value: 39.032
- type: mrr_at_1000
value: 39.086999999999996
- type: mrr_at_3
value: 35.407
- type: mrr_at_5
value: 36.76
- type: ndcg_at_1
value: 29.110000000000003
- type: ndcg_at_10
value: 38.231
- type: ndcg_at_100
value: 44.425
- type: ndcg_at_1000
value: 46.771
- type: ndcg_at_3
value: 33.095
- type: ndcg_at_5
value: 35.459
- type: precision_at_1
value: 29.110000000000003
- type: precision_at_10
value: 7.215000000000001
- type: precision_at_100
value: 1.2109999999999999
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 16.058
- type: precision_at_5
value: 11.644
- type: recall_at_1
value: 23.072
- type: recall_at_10
value: 50.285999999999994
- type: recall_at_100
value: 76.596
- type: recall_at_1000
value: 92.861
- type: recall_at_3
value: 35.702
- type: recall_at_5
value: 42.152
- type: map_at_1
value: 24.937916666666666
- type: map_at_10
value: 33.755250000000004
- type: map_at_100
value: 34.955999999999996
- type: map_at_1000
value: 35.070499999999996
- type: map_at_3
value: 30.98708333333333
- type: map_at_5
value: 32.51491666666666
- type: mrr_at_1
value: 29.48708333333333
- type: mrr_at_10
value: 37.92183333333334
- type: mrr_at_100
value: 38.76583333333333
- type: mrr_at_1000
value: 38.82466666666667
- type: mrr_at_3
value: 35.45125
- type: mrr_at_5
value: 36.827000000000005
- type: ndcg_at_1
value: 29.48708333333333
- type: ndcg_at_10
value: 39.05225
- type: ndcg_at_100
value: 44.25983333333334
- type: ndcg_at_1000
value: 46.568333333333335
- type: ndcg_at_3
value: 34.271583333333325
- type: ndcg_at_5
value: 36.483916666666666
- type: precision_at_1
value: 29.48708333333333
- type: precision_at_10
value: 6.865749999999999
- type: precision_at_100
value: 1.1195833333333332
- type: precision_at_1000
value: 0.15058333333333335
- type: precision_at_3
value: 15.742083333333333
- type: precision_at_5
value: 11.221916666666667
- type: recall_at_1
value: 24.937916666666666
- type: recall_at_10
value: 50.650416666666665
- type: recall_at_100
value: 73.55383333333334
- type: recall_at_1000
value: 89.61691666666667
- type: recall_at_3
value: 37.27808333333334
- type: recall_at_5
value: 42.99475
- type: map_at_1
value: 23.947
- type: map_at_10
value: 30.575000000000003
- type: map_at_100
value: 31.465
- type: map_at_1000
value: 31.558000000000003
- type: map_at_3
value: 28.814
- type: map_at_5
value: 29.738999999999997
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.415
- type: mrr_at_100
value: 34.18
- type: mrr_at_1000
value: 34.245
- type: mrr_at_3
value: 31.621
- type: mrr_at_5
value: 32.549
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.482
- type: ndcg_at_100
value: 38.915
- type: ndcg_at_1000
value: 41.355
- type: ndcg_at_3
value: 31.139
- type: ndcg_at_5
value: 32.589
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.322
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.344000000000001
- type: precision_at_5
value: 8.988
- type: recall_at_1
value: 23.947
- type: recall_at_10
value: 43.647999999999996
- type: recall_at_100
value: 63.851
- type: recall_at_1000
value: 82.0
- type: recall_at_3
value: 34.288000000000004
- type: recall_at_5
value: 38.117000000000004
- type: map_at_1
value: 16.197
- type: map_at_10
value: 22.968
- type: map_at_100
value: 24.095
- type: map_at_1000
value: 24.217
- type: map_at_3
value: 20.771
- type: map_at_5
value: 21.995
- type: mrr_at_1
value: 19.511
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.500999999999998
- type: mrr_at_1000
value: 27.578999999999997
- type: mrr_at_3
value: 24.421
- type: mrr_at_5
value: 25.604
- type: ndcg_at_1
value: 19.511
- type: ndcg_at_10
value: 27.386
- type: ndcg_at_100
value: 32.828
- type: ndcg_at_1000
value: 35.739
- type: ndcg_at_3
value: 23.405
- type: ndcg_at_5
value: 25.255
- type: precision_at_1
value: 19.511
- type: precision_at_10
value: 5.017
- type: precision_at_100
value: 0.91
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 11.023
- type: precision_at_5
value: 8.025
- type: recall_at_1
value: 16.197
- type: recall_at_10
value: 37.09
- type: recall_at_100
value: 61.778
- type: recall_at_1000
value: 82.56599999999999
- type: recall_at_3
value: 26.034000000000002
- type: recall_at_5
value: 30.762
- type: map_at_1
value: 25.41
- type: map_at_10
value: 33.655
- type: map_at_100
value: 34.892
- type: map_at_1000
value: 34.995
- type: map_at_3
value: 30.94
- type: map_at_5
value: 32.303
- type: mrr_at_1
value: 29.477999999999998
- type: mrr_at_10
value: 37.443
- type: mrr_at_100
value: 38.383
- type: mrr_at_1000
value: 38.440000000000005
- type: mrr_at_3
value: 34.949999999999996
- type: mrr_at_5
value: 36.228
- type: ndcg_at_1
value: 29.477999999999998
- type: ndcg_at_10
value: 38.769
- type: ndcg_at_100
value: 44.245000000000005
- type: ndcg_at_1000
value: 46.593
- type: ndcg_at_3
value: 33.623
- type: ndcg_at_5
value: 35.766
- type: precision_at_1
value: 29.477999999999998
- type: precision_at_10
value: 6.455
- type: precision_at_100
value: 1.032
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 14.893999999999998
- type: precision_at_5
value: 10.485
- type: recall_at_1
value: 25.41
- type: recall_at_10
value: 50.669
- type: recall_at_100
value: 74.084
- type: recall_at_1000
value: 90.435
- type: recall_at_3
value: 36.679
- type: recall_at_5
value: 41.94
- type: map_at_1
value: 23.339
- type: map_at_10
value: 31.852000000000004
- type: map_at_100
value: 33.411
- type: map_at_1000
value: 33.62
- type: map_at_3
value: 28.929
- type: map_at_5
value: 30.542
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.301
- type: mrr_at_100
value: 37.288
- type: mrr_at_1000
value: 37.349
- type: mrr_at_3
value: 33.663
- type: mrr_at_5
value: 35.165
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 37.462
- type: ndcg_at_100
value: 43.620999999999995
- type: ndcg_at_1000
value: 46.211
- type: ndcg_at_3
value: 32.68
- type: ndcg_at_5
value: 34.981
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.1739999999999995
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 15.217
- type: precision_at_5
value: 11.265
- type: recall_at_1
value: 23.339
- type: recall_at_10
value: 48.376999999999995
- type: recall_at_100
value: 76.053
- type: recall_at_1000
value: 92.455
- type: recall_at_3
value: 34.735
- type: recall_at_5
value: 40.71
- type: map_at_1
value: 18.925
- type: map_at_10
value: 26.017000000000003
- type: map_at_100
value: 27.034000000000002
- type: map_at_1000
value: 27.156000000000002
- type: map_at_3
value: 23.604
- type: map_at_5
value: 24.75
- type: mrr_at_1
value: 20.333000000000002
- type: mrr_at_10
value: 27.915
- type: mrr_at_100
value: 28.788000000000004
- type: mrr_at_1000
value: 28.877999999999997
- type: mrr_at_3
value: 25.446999999999996
- type: mrr_at_5
value: 26.648
- type: ndcg_at_1
value: 20.333000000000002
- type: ndcg_at_10
value: 30.673000000000002
- type: ndcg_at_100
value: 35.618
- type: ndcg_at_1000
value: 38.517
- type: ndcg_at_3
value: 25.71
- type: ndcg_at_5
value: 27.679
- type: precision_at_1
value: 20.333000000000002
- type: precision_at_10
value: 4.9910000000000005
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 11.029
- type: precision_at_5
value: 7.8740000000000006
- type: recall_at_1
value: 18.925
- type: recall_at_10
value: 43.311
- type: recall_at_100
value: 66.308
- type: recall_at_1000
value: 87.49
- type: recall_at_3
value: 29.596
- type: recall_at_5
value: 34.245
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.714
- type: map_at_10
value: 23.194
- type: map_at_100
value: 24.976000000000003
- type: map_at_1000
value: 25.166
- type: map_at_3
value: 19.709
- type: map_at_5
value: 21.523999999999997
- type: mrr_at_1
value: 30.619000000000003
- type: mrr_at_10
value: 42.563
- type: mrr_at_100
value: 43.386
- type: mrr_at_1000
value: 43.423
- type: mrr_at_3
value: 39.555
- type: mrr_at_5
value: 41.268
- type: ndcg_at_1
value: 30.619000000000003
- type: ndcg_at_10
value: 31.836
- type: ndcg_at_100
value: 38.652
- type: ndcg_at_1000
value: 42.088
- type: ndcg_at_3
value: 26.733
- type: ndcg_at_5
value: 28.435
- type: precision_at_1
value: 30.619000000000003
- type: precision_at_10
value: 9.751999999999999
- type: precision_at_100
value: 1.71
- type: precision_at_1000
value: 0.23500000000000001
- type: precision_at_3
value: 19.935
- type: precision_at_5
value: 14.984
- type: recall_at_1
value: 13.714
- type: recall_at_10
value: 37.26
- type: recall_at_100
value: 60.546
- type: recall_at_1000
value: 79.899
- type: recall_at_3
value: 24.325
- type: recall_at_5
value: 29.725
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.462
- type: map_at_10
value: 18.637
- type: map_at_100
value: 26.131999999999998
- type: map_at_1000
value: 27.607
- type: map_at_3
value: 13.333
- type: map_at_5
value: 15.654000000000002
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.32600000000001
- type: mrr_at_100
value: 74.60900000000001
- type: mrr_at_1000
value: 74.62
- type: mrr_at_3
value: 72.667
- type: mrr_at_5
value: 73.817
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.028999999999996
- type: ndcg_at_100
value: 44.199
- type: ndcg_at_1000
value: 51.629999999999995
- type: ndcg_at_3
value: 44.113
- type: ndcg_at_5
value: 41.731
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 31.900000000000002
- type: precision_at_100
value: 10.043000000000001
- type: precision_at_1000
value: 1.926
- type: precision_at_3
value: 47.417
- type: precision_at_5
value: 40.65
- type: recall_at_1
value: 8.462
- type: recall_at_10
value: 24.293
- type: recall_at_100
value: 50.146
- type: recall_at_1000
value: 74.034
- type: recall_at_3
value: 14.967
- type: recall_at_5
value: 18.682000000000002
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.84499999999999
- type: f1
value: 42.48106691979349
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.034
- type: map_at_10
value: 82.76
- type: map_at_100
value: 82.968
- type: map_at_1000
value: 82.98299999999999
- type: map_at_3
value: 81.768
- type: map_at_5
value: 82.418
- type: mrr_at_1
value: 80.048
- type: mrr_at_10
value: 87.64999999999999
- type: mrr_at_100
value: 87.712
- type: mrr_at_1000
value: 87.713
- type: mrr_at_3
value: 87.01100000000001
- type: mrr_at_5
value: 87.466
- type: ndcg_at_1
value: 80.048
- type: ndcg_at_10
value: 86.643
- type: ndcg_at_100
value: 87.361
- type: ndcg_at_1000
value: 87.606
- type: ndcg_at_3
value: 85.137
- type: ndcg_at_5
value: 86.016
- type: precision_at_1
value: 80.048
- type: precision_at_10
value: 10.372
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 32.638
- type: precision_at_5
value: 20.177
- type: recall_at_1
value: 74.034
- type: recall_at_10
value: 93.769
- type: recall_at_100
value: 96.569
- type: recall_at_1000
value: 98.039
- type: recall_at_3
value: 89.581
- type: recall_at_5
value: 91.906
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.5
- type: map_at_10
value: 32.857
- type: map_at_100
value: 34.589
- type: map_at_1000
value: 34.778
- type: map_at_3
value: 29.160999999999998
- type: map_at_5
value: 31.033
- type: mrr_at_1
value: 40.123
- type: mrr_at_10
value: 48.776
- type: mrr_at_100
value: 49.495
- type: mrr_at_1000
value: 49.539
- type: mrr_at_3
value: 46.605000000000004
- type: mrr_at_5
value: 47.654
- type: ndcg_at_1
value: 40.123
- type: ndcg_at_10
value: 40.343
- type: ndcg_at_100
value: 46.56
- type: ndcg_at_1000
value: 49.777
- type: ndcg_at_3
value: 37.322
- type: ndcg_at_5
value: 37.791000000000004
- type: precision_at_1
value: 40.123
- type: precision_at_10
value: 11.08
- type: precision_at_100
value: 1.752
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 24.897
- type: precision_at_5
value: 17.809
- type: recall_at_1
value: 20.5
- type: recall_at_10
value: 46.388
- type: recall_at_100
value: 69.552
- type: recall_at_1000
value: 89.011
- type: recall_at_3
value: 33.617999999999995
- type: recall_at_5
value: 38.211
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.135999999999996
- type: map_at_10
value: 61.673
- type: map_at_100
value: 62.562
- type: map_at_1000
value: 62.62
- type: map_at_3
value: 58.467999999999996
- type: map_at_5
value: 60.463
- type: mrr_at_1
value: 78.271
- type: mrr_at_10
value: 84.119
- type: mrr_at_100
value: 84.29299999999999
- type: mrr_at_1000
value: 84.299
- type: mrr_at_3
value: 83.18900000000001
- type: mrr_at_5
value: 83.786
- type: ndcg_at_1
value: 78.271
- type: ndcg_at_10
value: 69.935
- type: ndcg_at_100
value: 73.01299999999999
- type: ndcg_at_1000
value: 74.126
- type: ndcg_at_3
value: 65.388
- type: ndcg_at_5
value: 67.906
- type: precision_at_1
value: 78.271
- type: precision_at_10
value: 14.562
- type: precision_at_100
value: 1.6969999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 41.841
- type: precision_at_5
value: 27.087
- type: recall_at_1
value: 39.135999999999996
- type: recall_at_10
value: 72.809
- type: recall_at_100
value: 84.86200000000001
- type: recall_at_1000
value: 92.208
- type: recall_at_3
value: 62.76199999999999
- type: recall_at_5
value: 67.718
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.60600000000001
- type: ap
value: 86.6579587804335
- type: f1
value: 90.5938853929307
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.852
- type: map_at_10
value: 33.982
- type: map_at_100
value: 35.116
- type: map_at_1000
value: 35.167
- type: map_at_3
value: 30.134
- type: map_at_5
value: 32.340999999999994
- type: mrr_at_1
value: 22.479
- type: mrr_at_10
value: 34.594
- type: mrr_at_100
value: 35.672
- type: mrr_at_1000
value: 35.716
- type: mrr_at_3
value: 30.84
- type: mrr_at_5
value: 32.998
- type: ndcg_at_1
value: 22.493
- type: ndcg_at_10
value: 40.833000000000006
- type: ndcg_at_100
value: 46.357
- type: ndcg_at_1000
value: 47.637
- type: ndcg_at_3
value: 32.995999999999995
- type: ndcg_at_5
value: 36.919000000000004
- type: precision_at_1
value: 22.493
- type: precision_at_10
value: 6.465999999999999
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.030999999999999
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.852
- type: recall_at_10
value: 61.934999999999995
- type: recall_at_100
value: 87.611
- type: recall_at_1000
value: 97.441
- type: recall_at_3
value: 40.583999999999996
- type: recall_at_5
value: 49.992999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.36069311445507
- type: f1
value: 93.16456330371453
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.74692202462381
- type: f1
value: 58.17903579421599
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.80833893745796
- type: f1
value: 72.70786592684664
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69872225958305
- type: f1
value: 78.61626934504731
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.058658628717694
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.85561739360599
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.290259910144385
- type: mrr
value: 32.44223046102856
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.288
- type: map_at_10
value: 12.267999999999999
- type: map_at_100
value: 15.557000000000002
- type: map_at_1000
value: 16.98
- type: map_at_3
value: 8.866
- type: map_at_5
value: 10.418
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 52.681
- type: mrr_at_100
value: 53.315999999999995
- type: mrr_at_1000
value: 53.357
- type: mrr_at_3
value: 51.393
- type: mrr_at_5
value: 51.903999999999996
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.305
- type: ndcg_at_100
value: 30.825999999999997
- type: ndcg_at_1000
value: 39.393
- type: ndcg_at_3
value: 39.931
- type: ndcg_at_5
value: 37.519999999999996
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.728
- type: precision_at_100
value: 7.932
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.184000000000005
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.288
- type: recall_at_10
value: 16.195
- type: recall_at_100
value: 31.135
- type: recall_at_1000
value: 61.531000000000006
- type: recall_at_3
value: 10.313
- type: recall_at_5
value: 12.754999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.216
- type: map_at_10
value: 42.588
- type: map_at_100
value: 43.702999999999996
- type: map_at_1000
value: 43.739
- type: map_at_3
value: 38.177
- type: map_at_5
value: 40.754000000000005
- type: mrr_at_1
value: 31.866
- type: mrr_at_10
value: 45.189
- type: mrr_at_100
value: 46.056000000000004
- type: mrr_at_1000
value: 46.081
- type: mrr_at_3
value: 41.526999999999994
- type: mrr_at_5
value: 43.704
- type: ndcg_at_1
value: 31.837
- type: ndcg_at_10
value: 50.178
- type: ndcg_at_100
value: 54.98800000000001
- type: ndcg_at_1000
value: 55.812
- type: ndcg_at_3
value: 41.853
- type: ndcg_at_5
value: 46.153
- type: precision_at_1
value: 31.837
- type: precision_at_10
value: 8.43
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.911000000000001
- type: recall_at_1
value: 28.216
- type: recall_at_10
value: 70.8
- type: recall_at_100
value: 91.857
- type: recall_at_1000
value: 97.941
- type: recall_at_3
value: 49.196
- type: recall_at_5
value: 59.072
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.22800000000001
- type: map_at_10
value: 85.115
- type: map_at_100
value: 85.72
- type: map_at_1000
value: 85.737
- type: map_at_3
value: 82.149
- type: map_at_5
value: 84.029
- type: mrr_at_1
value: 81.96
- type: mrr_at_10
value: 88.00200000000001
- type: mrr_at_100
value: 88.088
- type: mrr_at_1000
value: 88.089
- type: mrr_at_3
value: 87.055
- type: mrr_at_5
value: 87.715
- type: ndcg_at_1
value: 82.01
- type: ndcg_at_10
value: 88.78
- type: ndcg_at_100
value: 89.91
- type: ndcg_at_1000
value: 90.013
- type: ndcg_at_3
value: 85.957
- type: ndcg_at_5
value: 87.56
- type: precision_at_1
value: 82.01
- type: precision_at_10
value: 13.462
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.732000000000003
- type: recall_at_1
value: 71.22800000000001
- type: recall_at_10
value: 95.69
- type: recall_at_100
value: 99.531
- type: recall_at_1000
value: 99.98
- type: recall_at_3
value: 87.632
- type: recall_at_5
value: 92.117
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 52.31768034366916
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.640266772723606
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.7780000000000005
- type: map_at_10
value: 12.299
- type: map_at_100
value: 14.363000000000001
- type: map_at_1000
value: 14.71
- type: map_at_3
value: 8.738999999999999
- type: map_at_5
value: 10.397
- type: mrr_at_1
value: 23.599999999999998
- type: mrr_at_10
value: 34.845
- type: mrr_at_100
value: 35.916
- type: mrr_at_1000
value: 35.973
- type: mrr_at_3
value: 31.7
- type: mrr_at_5
value: 33.535
- type: ndcg_at_1
value: 23.599999999999998
- type: ndcg_at_10
value: 20.522000000000002
- type: ndcg_at_100
value: 28.737000000000002
- type: ndcg_at_1000
value: 34.596
- type: ndcg_at_3
value: 19.542
- type: ndcg_at_5
value: 16.958000000000002
- type: precision_at_1
value: 23.599999999999998
- type: precision_at_10
value: 10.67
- type: precision_at_100
value: 2.259
- type: precision_at_1000
value: 0.367
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 14.879999999999999
- type: recall_at_1
value: 4.7780000000000005
- type: recall_at_10
value: 21.617
- type: recall_at_100
value: 45.905
- type: recall_at_1000
value: 74.42
- type: recall_at_3
value: 11.148
- type: recall_at_5
value: 15.082999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.22372750297885
- type: cos_sim_spearman
value: 79.40972617119405
- type: euclidean_pearson
value: 80.6101072020434
- type: euclidean_spearman
value: 79.53844217225202
- type: manhattan_pearson
value: 80.57265975286111
- type: manhattan_spearman
value: 79.46335611792958
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.43713315520749
- type: cos_sim_spearman
value: 77.44128693329532
- type: euclidean_pearson
value: 81.63869928101123
- type: euclidean_spearman
value: 77.29512977961515
- type: manhattan_pearson
value: 81.63704185566183
- type: manhattan_spearman
value: 77.29909412738657
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.59451537860527
- type: cos_sim_spearman
value: 82.97994638856723
- type: euclidean_pearson
value: 82.89478688288412
- type: euclidean_spearman
value: 83.58740751053104
- type: manhattan_pearson
value: 82.69140840941608
- type: manhattan_spearman
value: 83.33665956040555
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.00756527711764
- type: cos_sim_spearman
value: 81.83560996841379
- type: euclidean_pearson
value: 82.07684151976518
- type: euclidean_spearman
value: 82.00913052060511
- type: manhattan_pearson
value: 82.05690778488794
- type: manhattan_spearman
value: 82.02260252019525
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.13710262895447
- type: cos_sim_spearman
value: 87.26412811156248
- type: euclidean_pearson
value: 86.94151453230228
- type: euclidean_spearman
value: 87.5363796699571
- type: manhattan_pearson
value: 86.86989424083748
- type: manhattan_spearman
value: 87.47315940781353
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.0230597603627
- type: cos_sim_spearman
value: 84.93344499318864
- type: euclidean_pearson
value: 84.23754743431141
- type: euclidean_spearman
value: 85.09707376597099
- type: manhattan_pearson
value: 84.04325160987763
- type: manhattan_spearman
value: 84.89353071339909
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.75620824563921
- type: cos_sim_spearman
value: 87.15065513706398
- type: euclidean_pearson
value: 88.26281533633521
- type: euclidean_spearman
value: 87.51963738643983
- type: manhattan_pearson
value: 88.25599267618065
- type: manhattan_spearman
value: 87.58048736047483
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.74645319195137
- type: cos_sim_spearman
value: 65.29996325037214
- type: euclidean_pearson
value: 67.04297794086443
- type: euclidean_spearman
value: 65.43841726694343
- type: manhattan_pearson
value: 67.39459955690904
- type: manhattan_spearman
value: 65.92864704413651
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.31291020270801
- type: cos_sim_spearman
value: 85.86473738688068
- type: euclidean_pearson
value: 85.65537275064152
- type: euclidean_spearman
value: 86.13087454209642
- type: manhattan_pearson
value: 85.43946955047609
- type: manhattan_spearman
value: 85.91568175344916
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.93798118350695
- type: mrr
value: 95.93536274908824
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.594
- type: map_at_10
value: 66.81899999999999
- type: map_at_100
value: 67.368
- type: map_at_1000
value: 67.4
- type: map_at_3
value: 64.061
- type: map_at_5
value: 65.47
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.219
- type: mrr_at_100
value: 68.655
- type: mrr_at_1000
value: 68.684
- type: mrr_at_3
value: 66.22200000000001
- type: mrr_at_5
value: 67.289
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.275
- type: ndcg_at_100
value: 73.642
- type: ndcg_at_1000
value: 74.373
- type: ndcg_at_3
value: 66.521
- type: ndcg_at_5
value: 68.581
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.556
- type: precision_at_5
value: 16.8
- type: recall_at_1
value: 57.594
- type: recall_at_10
value: 83.622
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.64399999999999
- type: recall_at_5
value: 75.983
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85841584158416
- type: cos_sim_ap
value: 96.66996142314342
- type: cos_sim_f1
value: 92.83208020050125
- type: cos_sim_precision
value: 93.06532663316584
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.85841584158416
- type: dot_ap
value: 96.6775307676576
- type: dot_f1
value: 92.69289729177312
- type: dot_precision
value: 94.77533960292581
- type: dot_recall
value: 90.7
- type: euclidean_accuracy
value: 99.86138613861387
- type: euclidean_ap
value: 96.6338454403108
- type: euclidean_f1
value: 92.92214357937311
- type: euclidean_precision
value: 93.96728016359918
- type: euclidean_recall
value: 91.9
- type: manhattan_accuracy
value: 99.86237623762376
- type: manhattan_ap
value: 96.60370449645053
- type: manhattan_f1
value: 92.91177970423253
- type: manhattan_precision
value: 94.7970863683663
- type: manhattan_recall
value: 91.10000000000001
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.6775307676576
- type: max_f1
value: 92.92214357937311
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.77977058695198
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.2725272535638
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 53.64052466362125
- type: mrr
value: 54.533067014684654
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.677624219206578
- type: cos_sim_spearman
value: 30.121368518123447
- type: dot_pearson
value: 30.69870088041608
- type: dot_spearman
value: 29.61284927093751
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.855
- type: map_at_100
value: 9.885
- type: map_at_1000
value: 23.416999999999998
- type: map_at_3
value: 0.637
- type: map_at_5
value: 1.024
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.067
- type: mrr_at_100
value: 93.067
- type: mrr_at_1000
value: 93.067
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 93.067
- type: ndcg_at_1
value: 82.0
- type: ndcg_at_10
value: 75.899
- type: ndcg_at_100
value: 55.115
- type: ndcg_at_1000
value: 48.368
- type: ndcg_at_3
value: 79.704
- type: ndcg_at_5
value: 78.39699999999999
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 79.60000000000001
- type: precision_at_100
value: 56.06
- type: precision_at_1000
value: 21.206
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 83.2
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.078
- type: recall_at_100
value: 13.297
- type: recall_at_1000
value: 44.979
- type: recall_at_3
value: 0.6689999999999999
- type: recall_at_5
value: 1.106
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.258
- type: map_at_10
value: 10.439
- type: map_at_100
value: 16.89
- type: map_at_1000
value: 18.407999999999998
- type: map_at_3
value: 5.668
- type: map_at_5
value: 7.718
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 51.159
- type: mrr_at_100
value: 51.714000000000006
- type: mrr_at_1000
value: 51.714000000000006
- type: mrr_at_3
value: 47.959
- type: mrr_at_5
value: 50.407999999999994
- type: ndcg_at_1
value: 29.592000000000002
- type: ndcg_at_10
value: 26.037
- type: ndcg_at_100
value: 37.924
- type: ndcg_at_1000
value: 49.126999999999995
- type: ndcg_at_3
value: 30.631999999999998
- type: ndcg_at_5
value: 28.571
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 22.857
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 1.529
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.258
- type: recall_at_10
value: 16.554
- type: recall_at_100
value: 48.439
- type: recall_at_1000
value: 82.80499999999999
- type: recall_at_3
value: 7.283
- type: recall_at_5
value: 10.732
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.8858
- type: ap
value: 13.835684144362109
- type: f1
value: 53.803351693244586
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.50650820599886
- type: f1
value: 60.84357825979259
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.52131044852134
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59337187816654
- type: cos_sim_ap
value: 73.23925826533437
- type: cos_sim_f1
value: 67.34693877551021
- type: cos_sim_precision
value: 62.40432237730752
- type: cos_sim_recall
value: 73.13984168865434
- type: dot_accuracy
value: 85.31322644096085
- type: dot_ap
value: 72.30723963807422
- type: dot_f1
value: 66.47051612112296
- type: dot_precision
value: 62.0792305930845
- type: dot_recall
value: 71.53034300791556
- type: euclidean_accuracy
value: 85.61125350181797
- type: euclidean_ap
value: 73.32843720487845
- type: euclidean_f1
value: 67.36549633745895
- type: euclidean_precision
value: 64.60755813953489
- type: euclidean_recall
value: 70.36939313984169
- type: manhattan_accuracy
value: 85.63509566668654
- type: manhattan_ap
value: 73.16658488311325
- type: manhattan_f1
value: 67.20597386434349
- type: manhattan_precision
value: 63.60424028268551
- type: manhattan_recall
value: 71.2401055408971
- type: max_accuracy
value: 85.63509566668654
- type: max_ap
value: 73.32843720487845
- type: max_f1
value: 67.36549633745895
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.33779640625606
- type: cos_sim_ap
value: 84.83868375898157
- type: cos_sim_f1
value: 77.16506154017773
- type: cos_sim_precision
value: 74.62064005753327
- type: cos_sim_recall
value: 79.88912842623961
- type: dot_accuracy
value: 88.02732176815307
- type: dot_ap
value: 83.95089283763002
- type: dot_f1
value: 76.29635101196631
- type: dot_precision
value: 73.31771720613288
- type: dot_recall
value: 79.52725592854944
- type: euclidean_accuracy
value: 88.44452206310397
- type: euclidean_ap
value: 84.98384576824827
- type: euclidean_f1
value: 77.29311047696697
- type: euclidean_precision
value: 74.51232583065381
- type: euclidean_recall
value: 80.28949799815214
- type: manhattan_accuracy
value: 88.47362906042613
- type: manhattan_ap
value: 84.91421462218432
- type: manhattan_f1
value: 77.05107637204792
- type: manhattan_precision
value: 74.74484256243214
- type: manhattan_recall
value: 79.50415768401602
- type: max_accuracy
value: 88.47362906042613
- type: max_ap
value: 84.98384576824827
- type: max_f1
value: 77.29311047696697
---
Finetuned using the same data & library as [WhereIsAI/UAE-Large-V1](https://huggingface.co/WhereIsAI/UAE-Large-V1)
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
</p>
</h4>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding can map any text to a low-dimensional dense vector, which can be used for tasks like retrieval, classification, clustering, and semantic search.
It can also be used in vector databases for LLMs.
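As a minimal, hedged sketch, the snippet below shows how one of the `bge-*-v1.5` checkpoints from the model list further down can be used for semantic search with the Sentence Transformers library; the model choice, query instruction, and example texts are illustrative rather than prescriptive.
```python
from sentence_transformers import SentenceTransformer, util

# Illustrative retrieval sketch with a bge-*-v1.5 checkpoint (see the model list below).
model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# The retrieval instruction is prepended to queries only; passages are encoded as-is.
instruction = "Represent this sentence for searching relevant passages: "
queries = ["how do dense retrievers work?"]
passages = [
    "Dense retrievers embed queries and documents into the same vector space.",
    "The Eiffel Tower is located in Paris.",
]

q_emb = model.encode([instruction + q for q in queries], normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)

scores = util.cos_sim(q_emb, p_emb)  # shape: (num_queries, num_passages)
print(scores)
```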
************* 🌟**Updates**🌟 *************
- 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released
- 09/12/2023: New models:
- **New reranker models**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models (see the re-ranking sketch after this update list).
- **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate issues with the similarity distribution and enhance retrieval ability without an instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support for adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **LangChain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models with the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` models (short for BAAI General Embedding), which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
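As a minimal, hedged sketch of the re-ranking workflow recommended in the update above, the cross-encoder rerankers can be loaded as standard Hugging Face sequence-classification models; the query, candidate passages, and sorting logic below are illustrative placeholders rather than code from this repository.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative sketch: re-rank candidate passages for a query with a BGE cross-encoder.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-reranker-base")
model = AutoModelForSequenceClassification.from_pretrained("BAAI/bge-reranker-base")
model.eval()

query = "what is a cross-encoder reranker?"
candidates = [
    "A cross-encoder scores a query and a passage jointly in a single forward pass.",
    "Paris is the capital of France.",
]

with torch.no_grad():
    inputs = tokenizer(
        [[query, c] for c in candidates],
        padding=True, truncation=True, max_length=512, return_tensors="pt",
    )
    scores = model(**inputs).logits.view(-1).float()

# Higher score means more relevant; sort the top-k candidates by score.
reranked = [c for _, c in sorted(zip(scores.tolist(), candidates), reverse=True)]
print(reranked[0])
```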
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to the passages.
[2\]: Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
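A minimal sketch of this retrieve-then-rerank pipeline, using the FlagEmbedding APIs demonstrated later in this README (`encode`/`encode_queries` and `compute_score`) with a toy in-memory corpus; the 100/3 cut-offs mirror the example above and are placeholders:
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["The giant panda is a bear species endemic to China.",
          "Paris is the capital of France.",
          "Pandas mainly eat bamboo."]
query = "what is panda?"

# Stage 1: dense retrieval with a bge embedding model
embedder = FlagModel('BAAI/bge-base-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
corpus_emb = embedder.encode(corpus)
query_emb = embedder.encode_queries([query])
retrieval_scores = (query_emb @ corpus_emb.T)[0]
candidates = np.argsort(-retrieval_scores)[:100]   # keep the top-100 candidates (here: all 3 toy docs)

# Stage 2: re-rank the candidates with the cross-encoder
reranker = FlagReranker('BAAI/bge-reranker-base')
rerank_scores = np.array(reranker.compute_score([[query, corpus[i]] for i in candidates]))
top3 = [corpus[candidates[i]] for i in np.argsort(-rerank_scores)[:3]]
print(top3)
```
Tune the candidate count and final cut-off for your own latency and quality budget.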
All models have been uploaded to the Hugging Face Hub, and you can find them at https://huggingface.co/BAAI.
If you cannot access the Hugging Face Hub, you can also download the models at https://model.baai.ac.cn/models.
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model (a minimal sketch of the training data format is shown after the suggestions below).
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used to calculate similarity directly; it must be fine-tuned with contrastive learning first.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
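For reference, the fine-tuning example linked above expects a JSONL file where each line contains a query together with positive and (ideally hard) negative passages; the snippet below is only a sketch of that format with made-up texts:
```python
import json

# One training example per line: a query, its positive passages, and its (hard) negative passages.
examples = [
    {
        "query": "how to install FlagEmbedding",
        "pos": ["You can install FlagEmbedding with `pip install -U FlagEmbedding`."],
        "neg": ["The giant panda is a bear species endemic to China."],
    },
]

with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```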
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates this issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE models falls roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
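For illustration, a threshold-based filter on top of the embedding similarity could look like the sketch below; the 0.85 cut-off is only a placeholder that you should replace with a value chosen from the score distribution on your own data:
```python
from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-base-en-v1.5')
pairs = [("A man is eating food.", "A man is eating a piece of bread."),
         ("A man is eating food.", "The girl is carrying a baby.")]

emb_a = model.encode([a for a, _ in pairs])
emb_b = model.encode([b for _, b in pairs])
similarities = (emb_a * emb_b).sum(axis=1)  # bge embeddings are normalized, so this is cosine similarity

THRESHOLD = 0.85  # placeholder; tune it on your own data
for (a, b), score in zip(pairs, similarities):
    print(f"{score:.3f}  similar={score >= THRESHOLD}  |  {a} <-> {b}")
```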
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved the retrieval ability when no instruction is used:
omitting the instruction causes only a slight degradation in retrieval performance compared with using it,
so for convenience you can generate embeddings without the instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add the instruction to these short queries.
**The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.**
In all cases, the documents/passages do not need the instruction.
</details>
## Usage
### Usage for Embedding Model
Here are some examples of using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For an s2p (short query to long passage) retrieval task, use encode_queries(), which automatically adds the instruction to each query.
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction.
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel uses all available GPUs when encoding. Set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs,
or set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
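For example, to restrict encoding to a single GPU (or to hide all GPUs and run on CPU), set the variable before constructing the model:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # use only GPU 0
# os.environ["CUDA_VISIBLE_DEVICES"] = ""  # uncomment to make all GPUs unavailable

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
```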
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
    query_instruction="Represent this sentence for searching relevant passages: "
)
model.query_instruction = "Represent this sentence for searching relevant passages: "
```
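A short usage sketch, assuming the standard LangChain embeddings interface (`embed_query` prepends the query instruction, `embed_documents` does not):
```python
query_embedding = model.embed_query("what is a panda?")
doc_embeddings = model.embed_documents(["The giant panda is a bear species endemic to China.",
                                        "Paris is the capital of France."])
print(len(query_embedding), len(doc_embeddings[0]))  # 1024-dimensional vectors for bge-large models
```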
#### Using HuggingFace Transformers
With the transformers package, you can use the model as follows: first pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add an instruction to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
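Since the embeddings are L2-normalized above, the cosine similarity between sentences is simply their inner product:
```python
similarity = sentence_embeddings @ sentence_embeddings.T
print(similarity)
```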
### Usage for Reranker
Different from an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
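As noted above, these scores are raw logits and are not bounded to a fixed range; if you prefer scores in (0, 1), one simple option (a sketch, not part of the API shown here) is to pass them through a sigmoid:
```python
import torch
probs = torch.sigmoid(torch.tensor(scores))  # map raw relevance logits to (0, 1)
print(probs)
```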
#### Using HuggingFace Transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets across 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and then train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more bge training details, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
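For intuition, the contrastive stage optimizes an InfoNCE-style objective with in-batch negatives and a small temperature (0.01, as mentioned in the FAQ above); the snippet below is an illustrative sketch of such a loss, not the exact training code:
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, temperature: float = 0.01):
    """q_emb, p_emb: (batch, dim) L2-normalized query/passage embeddings, aligned row-wise."""
    logits = q_emb @ p_emb.T / temperature                     # (batch, batch) similarity matrix
    labels = torch.arange(q_emb.size(0), device=q_emb.device)  # the i-th passage is the positive for the i-th query
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings
q = F.normalize(torch.randn(4, 768), dim=-1)
p = F.normalize(torch.randn(4, 768), dim=-1)
print(in_batch_contrastive_loss(q, p))
```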
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data;
the data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
## Contact
If you have any questions or suggestions about this project, feel free to open an issue or a pull request.
You can also email Shitao Xiao ([email protected]) and Zheng Liu ([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BEAR",
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Mubin/allmini-ai-embedding-similarity | Mubin | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:812",
"loss:MultipleNegativesRankingLoss",
"dataset:Mubin/ai-job-embedding-finetuning",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,737 | 1,737 | 21 | 1 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets:
- Mubin/ai-job-embedding-finetuning
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:812
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Data engineering, AWS services, Big Data manipulation
sentences:
- 'Skills: SQL, PySpark, Databricks, Azure Synapse, Azure Data Factory.
Need hands-on coding
Requirements:1. Extensive knowledge of any of the big cloud services - Azure,
AWS or GCP with practical implementation (like S3, ADLS, Airflow, ADF, Lamda,
BigQuery, EC2, Fabric, Databricks or equivalent)2. Strong Hands-on experience
in SQL and Python/PySpark programming knowledge. Should be able to write code
during an interview with minimal syntax error.3. Strong foundational and architectural
knowledge of any of the data warehouses - Snowflake, Redshift. Synapse etc.4.
Should be able to drive and deliver projects with little or no guidance. Take
ownership, become a self-learner, and have leadership qualities.'
- "requirements, and general interpretation of dataMentor, teach, share knowledge\
\ and analytic techniques with your colleagues\n\nExperience And Preferred Qualifications\n\
\nMinimum of three years of relevant experience in developing analytic solutions\
\ with proficiency in SQL, Microsoft Excel, Power BI, or similar data analysis\
\ and ETL toolsBachelor's degree (B.S./B.A.) in an appropriate field from an accredited\
\ college or universityStrong verbal and written communication skills with the\
\ ability to convey highly complex technical concepts down to actionable objectives\
\ to advise stakeholders including attorneys, firm management, and firm colleaguesExperience\
\ in project management including planning, organizing, and supervising clients\
\ and colleagues towards successful project completionDemonstrated ability to\
\ learn and succeed in a fast-paced environmentExpert level of proficiency with\
\ T-SQL or equivalent including a high level of proficiency in database administrationHigh\
\ proficiency with Microsoft Excel including an ability to create pivot tables,\
\ power pivots & queries, formulas, and external data connectionsAbility to design\
\ and implement ETL solutionsExperience in developing client facing visualizations\
\ and reports using Power BI, SSRS or similar visualization tools is a plusKnowledge\
\ of coding in Python, R, DAX and/or MExperience in developing SSIS and/or SSAS\
\ solutions\n\nQualified candidates must apply online by visiting our website\
\ at www.morganlewis.com and selecting “Careers.”\n\nMorgan, Lewis & Bockius LLP\
\ is committed to \n\nPursuant to applicable state and municipal Fair Chance Laws\
\ and Ordinances, we will consider for employment qualified applicants with arrest\
\ and conviction records.\n\nCalifornia Applicants: Pursuant to the California\
\ Consumer Privacy Act, the following link contains the Firm's California Consumer\
\ Privacy Act Privacy Notice for Candidates which explains the categories of personal\
\ information that we collect and the purposes for which we use such personal\
\ information. CCPA Privacy Notice for Candidates\n\nMorgan Lewis & Bockius LLP\
\ is also \n\nIf You Are Interested In Applying For Employment With Morgan Lewis\
\ And Need Special Assistance Or An Accommodation To Use Our Website Or To Apply\
\ For a Position, Please Call Or Email The Following Contacts\n\nProfessional\
\ Staff positions – 1.888.534.5003 / [email protected] \n\nMorgan,\
\ Lewis & Bockius, LLP reasonably accommodates applicants and employees who need\
\ them to perform the essential functions of the job because of disability, religious\
\ belief, or other reason protected by applicable law. If you believe you need\
\ a reasonable accommodation during the application process, please contact Talent\
\ Acquisition at [email protected]."
- experience as a data engineer, data architect, with strong Python and SQL knowledge.
Experience with AWS services and Databricks, and ideal if they've developed data
pipelines in airflow or any streaming services (Kafka, Kinesis, etc). Expert-level
competency in Big Data manipulation and transformation, both within and outside
of a database. Need to have competency in API creation, and Machine Learning model
deployment. Experience mentoring others and can help as a field leader for newer
team members.Additional Skills & QualificationsExperience building decision-support
applications based on Data Science and Machine LearningExperience building effective,
efficient solutions in AWS, using Terraform and/or CloudFormation to build infrastructure
as codeFamiliarity with Snowflake, Airflow, and other Big Data and data pipeline
frameworksEducation, training, and certifications in engineering, computer science,
math, statistics, analytics, or cloud computing.
- source_sentence: Digital advertising, MLOps, audience segmentation
sentences:
- "experience, skills and abilities will determine where an employee is ultimately\
\ placed in the pay range.\n\nCategory/Shift\n\nSalaried Full-Time\n\nPhysical\
\ Location:\n\n6420 Poplar Avenue\n\nMemphis, TN\n\nFlexible Remote Work Schedule\n\
\nThe Job You Will Perform\n\nLead the hands-on IT development and deployment\
\ of data science and advanced analytics solutions for the North American Container\
\ (NAC) division of International Paper to support business strategies across\
\ approximately 200 packaging and specialty plants in the US and MexicoBreak down\
\ complex data science methodologies to business leaders in a way that is applicable\
\ to our North American Container business strategy.Identify opportunities for\
\ improving business performance and present identified opportunities to senior\
\ leadership; proactively driving the discovery of business value through data.Collaborate\
\ directly with NAC business partners to produce user stories, analyze source\
\ data capabilities, identify issues and opportunities, develop data models, and\
\ test and deploy innovative analytics solutions and systemsLead the application\
\ of data science techniques to analyze and interpret complex data sets, providing\
\ insights and enabling data-driven decision-making for North American ContainerLead\
\ analytics projects through agile or traditional project management methodologiesInfluence\
\ IT projects/initiatives with project managers, business leaders and other IT\
\ groups without direct reporting relationships.Work closely with IT Application\
\ Services team members to follow standards, best practices, and consultation\
\ for data engineeringRole includes: Data analysis, predictive and prescriptive\
\ modeling, machine learning, and algorithm development; collaborating and cross-training\
\ with analytics and visualization teams.Under general direction works on complex\
\ technical issues/problems of a large scope, impact, or importance. Independently\
\ resolves complex problems that have significant cost. Leads new technology innovations\
\ that define new “frontiers” in technical direction\n\nThe Skills You Will Bring\
\ \n\nBachelor’s degree in Computer Science, Information Technology, Statistics,\
\ or a related field is required. A Masters degree and/or PhD is preferred.Minimum\
\ 12 years of relevant work experience, less if holding a Masters or PhD.Skills\
\ with Data Visualization using tools like Microsoft Power BIDemonstrated leadership\
\ in building and deploying advanced analytics models for solving real business\
\ problems.Strong Interpersonal and Communication SkillsAdaptable to a changing\
\ work environment and dealing with ambiguity as it arises. Data Science Skills:Data\
\ analysisPredictive and Prescriptive ModelingMachine Learning (Python / R)Artificial\
\ Intelligence and Large Language ModelsAlgorithm DevelopmentExperience with Azure\
\ Analytics ServicesCompetencies:Dealing with AmbiguityFunctional / Technical\
\ Skills Problem SolvingCreativity\nThe Benefits You Will Enjoy\n\nPaid time off\
\ including Vacation and Holidays Retirement and 401k Matching ProgramMedical\
\ & Dental Education & Development (including Tuition Reimbursement)Life & Disability\
\ Insurance\n\nThe Career You Will Build\n\nLeadership trainingPromotional opportunities\n\
\nThe Impact You Will Make\n\nWe continue to build a better future for people,\
\ the plant, and our company! IP has been a good steward of sustainable practices\
\ across communities around the world for more than 120 years. Join our team and\
\ you’ll see why our team members say they’re Proud to be IP.\n\nThe Culture You\
\ Will Experience\n\nInternational Paper promotes employee well-being by providing\
\ safe, caring and inclusive workplaces. You will learn Safety Leadership Principles\
\ and have the opportunity to opt into Employee Networking Circles such as IPVets,\
\ IPride, Women in IP, and the African American ENC. We invite you to bring your\
\ uniqueness, creativity, talents, experiences, and safety mindset to be a part\
\ of our increasingly diverse culture.\n\nThe Company You Will Join\n\nInternational\
\ Paper (NYSE: IP) is a leading global supplier of renewable fiber-based products.\
\ We produce corrugated packaging products that protect and promote goods, and\
\ enable worldwide commerce, and pulp for diapers, tissue and other personal care\
\ products that promote health and wellness. Headquartered in Memphis, Tenn.,\
\ we employ approximately 38,000 colleagues globally. We serve customers worldwide,\
\ with manufacturing operations in North America, Latin America, North Africa\
\ and Europe. Net sales for 2021 were $19.4 billion. Additional information can\
\ be found by visiting InternationalPaper.com.\n\nInternational Paper is an Equal\
\ Opportunity/Affirmative Action Employer. All qualified applicants will receive\
\ consideration for employment without regard to sex, gender identity, sexual\
\ orientation, race, color, religion, national origin, disability, protected veteran\
\ status, age, or any other characteristic protected by law."
- 'experience, education, geographic location, and other factors. Description: This
role is within an organization responsible for developing and maintaining a high-performance
Advertising Platform across various online properties, including streaming services.
The Ad Platform Research team focuses on transforming advertising with data and
AI, seeking a lead machine learning engineer to develop prediction and optimization
engines for addressable ad platforms.
Key responsibilities include driving innovation, developing scalable solutions,
collaborating with teams, and mentoring. Preferred qualifications include experience
in digital advertising, knowledge of ML operations, and proficiency in relevant
technologies like PyTorch and TensorFlow.
Basic Qualifications:MS or PhD in computer science or EE.4+ years of working experience
on machine learning, and statistics in leading internet companies.Experience in
the advertising domain is preferred.Solid understanding of ML technologies, mathematics,
and statistics.Proficient with Java, Python, Scala, Spark, SQL, large scale ML/DL
platforms and processing tech stack.
Preferred Qualifications:Experience in digital video advertising or digital marketing
domain.Experience with feature store, audience segmentation and MLOps.Experience
with Pytorch, TensorFlow, Kubeflow, SageMaker or Databricks.
If you are interested in this role, then please click APPLY NOW. For other opportunities
available at Akkodis, or any questions, please contact Amit Kumar Singh at [email protected].
Equal Opportunity Employer/Veterans/Disabled
Benefit offerings include medical, dental, vision, term life insurance, short-term
disability insurance, additional voluntary benefits, commuter benefits, and a
401K plan. Our program provides employees the flexibility to choose the type of
coverage that meets their individual needs. Available paid leave may include Paid
Sick Leave, where required by law; any other paid leave required by Federal, State,
or local law; and Holiday pay upon meeting eligibility criteria. Disclaimer: These
benefit offerings do not apply to client-recruited jobs and jobs which are direct
hire to a client.
To read our Candidate Privacy Information Statement, which explains how we will
use your information, please visit https://www.akkodis.com/en/privacy-policy.'
- 'Qualifications
Master''s degree is preferred in a Technical Field, Computer Science, Information
Technology, or Business ManagementGood understanding of data structures and algorithms,
ETL processing, large-scale data and machine-learning production, data and computing
infrastructure, automation and workflow orchestration.Hands-on experience in Python,
Pyspark, SQL, and shell scripting or similar programming languagesHands-on Experience
in using cloud-based technologies throughout data and machine learning product
development.Hands-on experience with code versioning, automation and workflow
orchestration tools such as Github, Ansible, SLURM, Airflow and TerraformGood
Understanding of data warehousing concepts such as data migration and data integration
in Amazon Web Services (AWS) cloud or similar platformExcellent debugging and
code-reading skills.Documentation and structured programming to support sustainable
development.Ability to describe challenges and solutions in both technical and
business terms.Ability to develop and maintain excellent working relationships
at all organizational levels.'
- source_sentence: Geospatial data management, spatial analysis, PostGIS expertise
sentences:
- 'experiences, revenue generation, ad targeting, and other business outcomes.Conduct
data processing and analysis to uncover hidden patterns, correlations, and insights.Design
and implement A/B testing frameworks to test model quality and effectiveness.Collaborate
with engineering and product development teams to integrate data science solutions
into our products and services.Stay up-to-date with the latest technologies and
techniques in data science, machine learning, and artificial intelligence.
Technical Requirements:Strong proficiency in programming languages such as Python
or R for data analysis and modeling.Extensive experience with machine learning
techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc.Knowledge
of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, etc.).Experience with data visualization tools (e.g., Matplotlib,
Seaborn, Tableau).Familiarity with big data frameworks and tools (e.g., Hadoop,
Spark).Proficient in using query languages such as SQL.Experience with cloud computing
platforms (AWS, Azure, or Google Cloud) is a plus.Understanding of software development
practices and tools, including version control (Git).
Experience:3+ years of experience in a Data Scientist or similar role.Demonstrated
success in developing and deploying data models, algorithms, and predictive analytics
solutions.Experience working with large, complex datasets and solving analytical
problems using quantitative approaches.
Who You Are:Analytically minded with a passion for uncovering insights through
data analysis.Creative problem solver who is eager to tackle complex challenges.Excellent
communicator capable of explaining complex technical concepts to non-technical
stakeholders.Self-motivated and able to work independently in a remote environment.A
collaborative team player who thrives in a dynamic, fast-paced setting.
Join Us:At RTeams, you''ll be part of an innovative company that values the transformative
power of data. Enjoy the flexibility of remote work across the US, with standard
working hours that support work-life balance. Here, we believe in empowering our
team members to innovate, explore, and make a significant impact.'
- "Skills:Intermediate Level MS Excel (Pivot & Macros knowledge helpful)Intermediate\
\ Level MS PowerPoint (Presentation Slides & Charts)Familiarity with Data Storage\
\ platforms, directories and network drivesVBA ConceptsSQL BasicData Visualization\
\ Concepts\n\nSoft Skills:Punctuality is required due to the reporting deadlines\
\ & on time delivery of dataOrganizedTeam playerCurious & Quick Learner\n\nEducation/Experience:Associate\
\ Degree in a technical field such as computer science, computer engineering or\
\ related field required2 -3 years of experience requiredProcess certification,\
\ such as, Six Sigma, CBPP, BPM, ISO 20000, ITIL, CMMI\n\nSummary: The main function\
\ of the Data Analyst is to provide business intelligence support and supporting\
\ areas by means of both repeatable and ad hoc reporting delivery reports (charts,\
\ graphs, tables, etc.) that enable informed business decisions. \nJob"
- 'experience.Support database architecture performance and optimization.Support,
and explore new ways to monetize Galehead’s geospatial tools, including entering
new verticals.Provide as-needed support for both technical and business issues
related to geospatial tools and outputs, including coaching/training other team
members, as needed.Collaborate to develop new analytic data productsWrite and
maintain a suite of automated data processesBring your best stuff: we need the
best from everyone.
KEY REQUIREMENTS:Ability to create reproducible data processes, products, and
visualizations using Python and SQL (or similar).Strong analytical and problem
solving skills.Experience with open source geospatial processing tools including
PostGIS (or other spatial SQL), GDAL/OGR, and/or Geopandas.Communications: Effective
and thoughtful written and verbal communications. Work through issues or differing
perspectives in a concise and professional manner.Organization: Maintain focus
and extract value from the high volume of opportunities through command of the
mission and meticulous organization of information, communications, resources
and responsibilities.Collaboration: Serve as a resource to the entire team and
facilitate getting work completed cross-functionally.
PREFERED SKILLS/CAPABILITIESExperience using Postgresql including complex analytic
queries and performance considerations.Energy industry experience.Experience in
software development practices including, but not limited to Git, Jira, Agileogr/gdalpostgres/postgispython
- (pandas/geopandas)
GALEHEAD CULTURE:Accountability: Set and maintain high standards for yourself
and your coworkers.Problem-Solving: Willingness to consider problems and find/drive
a path forward. Identify and recommend solutions.Our Values:Bold: Demonstrate
a bias for action and stretching conventional boundaries with a commensurate ability
to acknowledge, define, and mitigate risk.Driven: Demonstrate an inherent motivation
to succeed, regardless of externalities.True: Demonstrate transparency at all
times, provide and accept constructive feedback.'
- source_sentence: Data analysis, statistical modeling, data visualization
sentences:
- "Skills: AWS, Spark, Adobe Analytics/AEP(Adobe Experience Platform) platform experience,\
\ Glue, Lamda, Python, Scala, EMR, Talend, PostgreSQL, Redshift\n\n Configure\
\ AEP to get the data set needed and then use spark (AWS glue ) to load data in\
\ the data lake Evaluate new use cases and design ETL technical solutions to meet\
\ requirements Develop ETL solutions to meet complex use cases\n\nAdobe Data Engineer\
\ || Remote"
- 'experience solutions and technologies.This is a hybrid position, with the ideal
candidate located near one of our regional hubs (New York, Chicago, Boston) and
able to travel to an office as needed for working sessions or team meetings.
Curinos is looking for a Senior Data Engineering Manager to lead the build and
expansion of our Retail Consumer product suite, relied on by our clients for precision
deposit analysis and optimization. Our Retail Consumer business covers the largest
suite of Curinos products and this position is a critical role within the Product
Development team, combining both hands-on technical work (architecture, roadmap,
code review, POC of new/complex methodologies) and team management.In this role,
you will lead a cross-functional Product Development team of Software, Data and
QA engineers covering all aspects of product development (UI/Middle Tier/API/Backend/ETL).
You will collaborate with product owners on business requirements and features,
work with the development team to identify scalable architecture and methodologies
needed to implement, and own the timely and error-free delivery of those features.
You will be expected to be “hands-on-keys” in this role, leading the team by example
and helping to establish and model quality software development practices as the
team, products and business continues to grow.
ResponsibilitiesBuilding and leading a Product Engineering team consisting of
Software, Data and QA EngineersModeling quality software development practices
to the team by taking on user stories and writing elegant and scalable codeConducting
code reviews and providing feedback to help team members advance their skillsLeading
the design and development of performant, extendable and maintainable product
functionality, and coaching the team on the principles of efficient and scalable
designEngaging with product owner and LOB head to understand client needs and
craft product roadmaps and requirementsProviding input into the prioritization
of features to maximize value delivered to clientsAnalyzing complex business problems
and identifying solutions and own the implementationIdentifying new technologies
and tools which could improve the efficiency and productivity of your teamWorking
with in the Agile framework to manage the team’s day-to-day activitiesUnderstanding
Curinos’ Application, API and Data Engineering platforms and effectively using
them to build product featuresUnderstanding Curinos’ SDLC and compliance processes
and ensuring the team’s adherence to them
Base Salary Range: $160,000 to $185,000 (plus bonus)
Desired Skills & Expertise6+ years professional full stack experience developing
cloud based SaaS products using Java, SPA and related technologies with a complex
backend data processing system[SW1][NS2]3+ years of experience with SQL Server
or Databricks ETL, including hands-on experience developing SQL stored procedures
and SQL-based ETL pipelines2+ Years of management experience of engineers/ICsProven
ability to grow and lead geographically dispersed and cross-functional teamsA
passion for proactively identifying opportunities to eliminate manual work within
the SDLC process and as part of product operationA commitment to building a quality
and error-free product, via implementation of unit testing, integration testing,
and data validation strategiesA desire to design and develop for scale and in
anticipation of future use casesDemonstrated intellectual curiosity and innovative
thinking with a passion for problem-solvingSelf–discipline and willingness to
learn new skills, tools and technologiesExcellent verbal and written communication
skillsAdvanced proficiency in Java (including testing frameworks like Junit) and
T-SQL (including dynamic sql and the use of control structures) is an assetExperience
using Scala is a plusExperience using a templating language like Apache Freemarker
is a plusBachelors or advanced degrees (Masters or PhD) degree, preferably in
computer science, or a related engineering field
Why work at Curinos?Competitive benefits, including a range of Financial, Health
and Lifestyle benefits to choose fromFlexible working options, including home
working, flexible hours and part time options, depending on the role requirements
– please ask!Competitive annual leave, floating holidays, volunteering days and
a day off for your birthday!Learning and development tools to assist with your
career developmentWork with industry leading Subject Matter Experts and specialist
productsRegular social events and networking opportunitiesCollaborative, supportive
culture, including an active DE&I programEmployee Assistance Program which provides
expert third-party advice on wellbeing, relationships, legal and financial matters,
as well as access to counselling services
Applying:We know that sometimes the ''perfect candidate'' doesn''t exist, and
that people can be put off applying for a job if they don''t meet all the requirements.
If you''re excited about working for us and have relevant skills or experience,
please go ahead and apply. You could be just what we need!If you need any adjustments
to support your application, such as information in alternative formats, special
requirements to access our buildings or adjusted interview formats please contact
us at [email protected] and we’ll do everything we can to help.
Inclusivity at Curinos:We believe strongly in the value of diversity and creating
supportive, inclusive environments where our colleagues can succeed. As such,
Curinosis proud to be'
- "Qualifications\n Data Science, Statistics, and Data Analytics skillsData Visualization\
\ and Data Analysis skillsExperience with machine learning algorithms and predictive\
\ modelingProficiency in programming languages such as Python or RStrong problem-solving\
\ and critical thinking abilitiesExcellent communication and presentation skillsAbility\
\ to work independently and remotelyExperience in the field of data science or\
\ related rolesBachelor's degree in Data Science, Statistics, Computer Science,\
\ or a related field"
- source_sentence: NLP algorithm development, statistical modeling, biomedical informatics
sentences:
- 'skills for this position are:Natural Language Processing (NLP)Python (Programming
Language)Statistical ModelingHigh-Performance Liquid Chromatography (HPLC)Java
Job Description:We are seeking a highly skilled NLP Scientist to develop our innovative
and cutting-edge NLP/AI solutions to empower life science. This involves working
directly with our clients, as well as cross-functional Biomedical Science, Engineering,
and Business leaders, to identify, prioritize, and develop NLP/AI and Advanced
analytics products from inception to delivery.Key requirements and design innovative
NLP/AI solutions.Develop and validate cutting-edge NLP algorithms, including large
language models tailored for healthcare and biopharma use cases.Translate complex
technical insights into accessible language for non-technical stakeholders.Mentor
junior team members, fostering a culture of continuous learning and growth.Publish
findings in peer-reviewed journals and conferences.Engage with the broader scientific
community by attending conferences, workshops, and collaborating on research projects.
Qualifications:Ph.D. or master''s degree in biomedical NLP, Computer Science,
Biomedical Informatics, Computational Linguistics, Mathematics, or other related
fieldsPublication records in leading computer science or biomedical informatics
journals and conferences are highly desirable
Regards,Guru Prasath M US IT RecruiterPSRTEK Inc.Princeton, NJ [email protected]:
609-917-9967 Ext:114'
- 'Qualifications and Experience:
Bachelor’s degree in data science, Statistics, or related field, or an equivalent
combination of education and experience.Working knowledge of Salesforce.Ability
to leverage enterprise data for advanced reporting.Proficiency in combining various
data sources for robust output.Strong knowledge of Annuity products and distribution
structure.Influencing skills and change management abilities.4-6 years of experience
in financial services.Strong organizational skills.Proven success in influencing
across business units and management levels.Confidence and ability to make effective
business decisions.Willingness to travel (less. than 10%)
Drive. Discipline. Confidence. Focus. Commitment. Learn more about working at
Athene.
Athene is a Military Friendly Employer! Learn more about how we support our Veterans.
Athene celebrates diversity, is committed to inclusion and is proud to be'
- 'Skills :
a) Azure Data Factory – Min 3 years of project experiencea. Design of pipelinesb.
Use of project with On-prem to Cloud Data Migrationc. Understanding of ETLd. Change
Data Capture from Multiple Sourcese. Job Schedulingb) Azure Data Lake – Min 3
years of project experiencea. All steps from design to deliverb. Understanding
of different Zones and design principalc) Data Modeling experience Min 5 Yearsa.
Data Mart/Warehouseb. Columnar Data design and modelingd) Reporting using PowerBI
Min 3 yearsa. Analytical Reportingb. Business Domain Modeling and data dictionary
Interested please apply to the job, looking only for W2 candidates.'
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: triplet
name: Triplet
dataset:
name: ai job validation
type: ai-job-validation
metrics:
- type: cosine_accuracy
value: 0.9702970297029703
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: ai job test
type: ai-job-test
metrics:
- type: cosine_accuracy
value: 0.9803921568627451
name: Cosine Accuracy
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [ai-job-embedding-finetuning](https://huggingface.co/datasets/Mubin/ai-job-embedding-finetuning) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [ai-job-embedding-finetuning](https://huggingface.co/datasets/Mubin/ai-job-embedding-finetuning)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Mubin/allmini-ai-embedding-similarity")
# Run inference
sentences = [
'NLP algorithm development, statistical modeling, biomedical informatics',
"skills for this position are:Natural Language Processing (NLP)Python (Programming Language)Statistical ModelingHigh-Performance Liquid Chromatography (HPLC)Java Job Description:We are seeking a highly skilled NLP Scientist to develop our innovative and cutting-edge NLP/AI solutions to empower life science. This involves working directly with our clients, as well as cross-functional Biomedical Science, Engineering, and Business leaders, to identify, prioritize, and develop NLP/AI and Advanced analytics products from inception to delivery.Key requirements and design innovative NLP/AI solutions.Develop and validate cutting-edge NLP algorithms, including large language models tailored for healthcare and biopharma use cases.Translate complex technical insights into accessible language for non-technical stakeholders.Mentor junior team members, fostering a culture of continuous learning and growth.Publish findings in peer-reviewed journals and conferences.Engage with the broader scientific community by attending conferences, workshops, and collaborating on research projects. Qualifications:Ph.D. or master's degree in biomedical NLP, Computer Science, Biomedical Informatics, Computational Linguistics, Mathematics, or other related fieldsPublication records in leading computer science or biomedical informatics journals and conferences are highly desirable\n\nRegards,Guru Prasath M US IT RecruiterPSRTEK Inc.Princeton, NJ [email protected]: 609-917-9967 Ext:114",
'Skills :\na) Azure Data Factory – Min 3 years of project experiencea. Design of pipelinesb. Use of project with On-prem to Cloud Data Migrationc. Understanding of ETLd. Change Data Capture from Multiple Sourcese. Job Schedulingb) Azure Data Lake – Min 3 years of project experiencea. All steps from design to deliverb. Understanding of different Zones and design principalc) Data Modeling experience Min 5 Yearsa. Data Mart/Warehouseb. Columnar Data design and modelingd) Reporting using PowerBI Min 3 yearsa. Analytical Reportingb. Business Domain Modeling and data dictionary\nInterested please apply to the job, looking only for W2 candidates.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `ai-job-validation` and `ai-job-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | ai-job-validation | ai-job-test |
|:--------------------|:------------------|:------------|
| **cosine_accuracy** | **0.9703** | **0.9804** |
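The reported accuracies can be reproduced with the `TripletEvaluator` API. Below is a minimal sketch; the split name passed to `load_dataset` is an assumption and may need to be adjusted to the actual dataset configuration.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("Mubin/allmini-ai-embedding-similarity")

# Assumed split name; adjust to the actual dataset configuration.
val = load_dataset("Mubin/ai-job-embedding-finetuning", split="validation")

evaluator = TripletEvaluator(
    anchors=val["query"],
    positives=val["job_description_pos"],
    negatives=val["job_description_neg"],
    name="ai-job-validation",
)
print(evaluator(model))  # -> {'ai-job-validation_cosine_accuracy': ...}
```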
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### ai-job-embedding-finetuning
* Dataset: [ai-job-embedding-finetuning](https://huggingface.co/datasets/Mubin/ai-job-embedding-finetuning) at [b18b3c2](https://huggingface.co/datasets/Mubin/ai-job-embedding-finetuning/tree/b18b3c20bc31354d97bad62866da97618b6c13b7)
* Size: 812 training samples
* Columns: <code>query</code>, <code>job_description_pos</code>, and <code>job_description_neg</code>
* Approximate statistics based on the first 812 samples:
| | query | job_description_pos | job_description_neg |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.03 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 216.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 217.63 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| query | job_description_pos | job_description_neg |
|:----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Data Engineering Lead, Databricks administration, Neo4j expertise, ETL processes</code> | <code>Requirements<br><br>Experience: At least 6 years of hands-on experience in deploying production-quality code, with a strong preference for experience in Python, Java, or Scala for data processing (Python preferred).Technical Proficiency: Advanced knowledge of data-related Python packages and a profound understanding of SQL and Databricks.Graph Database Expertise: Solid grasp of Cypher and experience with graph databases like Neo4j.ETL/ELT Knowledge: Proven track record in implementing ETL (or ELT) best practices at scale and familiarity with data pipeline tools.<br><br>Preferred Qualifications<br><br>Professional experience using Python, Java, or Scala for data processing (Python preferred)<br><br>Working Conditions And Physical Requirements<br><br>Ability to work for long periods at a computer/deskStandard office environment<br><br>About The Organization<br><br>Fullsight is an integrated brand of our three primary affiliate companies – SAE Industry Technologies Consortia, SAE International and Performance Review Institute – a...</code> | <code>skills through a combination of education, work experience, and hobbies. You are excited about the complexity and challenges of creating intelligent, high-performance systems while working with a highly experienced and driven data science team.<br><br>If this described you, we are interested. You can be an integral part of a cross-disciplinary team working on highly visible projects that improve performance and grow the intelligence in our Financial Services marketing product suite. Our day-to-day work is performed in a progressive, high-tech workspace where we focus on a friendly, collaborative, and fulfilling environment.<br><br>Key Duties/Responsibilities<br><br>Leverage a richly populated feature stores to understand consumer and market behavior. 20%Implement a predictive model to determine whether a person or household is likely to open a lending or deposit account based on the advertising signals they've received. 20%Derive a set of new features that will help better understand the interplay betwe...</code> |
| <code>Snowflake data warehousing, Python design patterns, AWS tools expertise</code> | <code>Requirements:<br>- Good communication; and problem-solving abilities- Ability to work as an individual contributor; collaborating with Global team- Strong experience with Data Warehousing- OLTP, OLAP, Dimension, Facts, Data Modeling- Expertise implementing Python design patterns (Creational, Structural and Behavioral Patterns)- Expertise in Python building data application including reading, transforming; writing data sets- Strong experience in using boto3, pandas, numpy, pyarrow, Requests, Fast API, Asyncio, Aiohttp, PyTest, OAuth 2.0, multithreading, multiprocessing, snowflake python connector; Snowpark- Experience in Python building data APIs (Web/REST APIs)- Experience with Snowflake including SQL, Pipes, Stream, Tasks, Time Travel, Data Sharing, Query Optimization- Experience with Scripting language in Snowflake including SQL Stored Procs, Java Script Stored Procedures; Python UDFs- Understanding of Snowflake Internals; experience in integration with Reporting; UI applications- Stron...</code> | <code>skills and ability to lead detailed data analysis meetings/discussions.<br><br>Ability to work collaboratively with multi-functional and cross-border teams.<br><br>Good English communication written and spoken.<br><br>Nice to have;<br><br>Material master create experience in any of the following areas;<br><br>SAP<br><br>GGSM<br><br>SAP Data Analyst, MN/Remote - Direct Client</code> |
| <code>Cloud Data Engineering, Databricks Pyspark, Data Warehousing Design</code> | <code>Experience of Delta Lake, DWH, Data Integration, Cloud, Design and Data Modelling. Proficient in developing programs in Python and SQLExperience with Data warehouse Dimensional data modeling. Working with event based/streaming technologies to ingest and process data. Working with structured, semi structured and unstructured data. Optimize Databricks jobs for performance and scalability to handle big data workloads. Monitor and troubleshoot Databricks jobs, identify and resolve issues or bottlenecks. Implement best practices for data management, security, and governance within the Databricks environment. Experience designing and developing Enterprise Data Warehouse solutions. Proficient writing SQL queries and programming including stored procedures and reverse engineering existing process. Perform code reviews to ensure fit to requirements, optimal execution patterns and adherence to established standards. <br><br>Requirements: <br><br>You are:<br><br>Minimum 9+ years of experience is required. 5+ years...</code> | <code>QualificationsExpert knowledge of using and configuring GCP (Vertex), AWS, Azure Python: 5+ years of experienceMachine Learning libraries: Pytorch, JaxDevelopment tools: Bash, GitData Science frameworks: DatabricksAgile Software developmentCloud Management: Slurm, KubernetesData Logging: Weights and BiasesOrchestration, Autoscaling: Ray, ClearnML, WandB etc.<br>Optional QualificationsExperience training LLMs and VLMsML for Robotics, Computer Vision etc.Developing Browser Apps/Dashboards, both frontend and backend Javascript, React, etc. Emancro is committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
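As a point of reference, the loss above corresponds to the following construction in Sentence Transformers. The starting checkpoint shown is an assumption inferred from the 384-dimensional MiniLM architecture listed earlier, not a detail stated in this card.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Assumed starting checkpoint (384-dim MiniLM); the card does not name it explicitly.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```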
### Evaluation Dataset
#### ai-job-embedding-finetuning
* Dataset: [ai-job-embedding-finetuning](https://huggingface.co/datasets/Mubin/ai-job-embedding-finetuning) at [b18b3c2](https://huggingface.co/datasets/Mubin/ai-job-embedding-finetuning/tree/b18b3c20bc31354d97bad62866da97618b6c13b7)
* Size: 101 evaluation samples
* Columns: <code>query</code>, <code>job_description_pos</code>, and <code>job_description_neg</code>
* Approximate statistics based on the first 101 samples:
| | query | job_description_pos | job_description_neg |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 15.78 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 220.13 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 213.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| query | job_description_pos | job_description_neg |
|:---------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Big Data Engineer, Spark, Hadoop, AWS/GCP</code> | <code>Skills • Expertise and hands-on experience on Spark, and Hadoop echo system components – Must Have • Good and hand-on experience* of any of the Cloud (AWS/GCP) – Must Have • Good knowledge of HiveQL & SparkQL – Must Have Good knowledge of Shell script & Java/Scala/python – Good to Have • Good knowledge of SQL – Good to Have • Good knowledge of migration projects on Hadoop – Good to Have • Good Knowledge of one of the Workflow engines like Oozie, Autosys – Good to Have Good knowledge of Agile Development– Good to Have • Passionate about exploring new technologies – Good to Have • Automation approach – Good to Have <br>Thanks & RegardsShahrukh KhanEmail: [email protected]</code> | <code>experience:<br><br>GS-14:<br><br>Supervisory/Managerial Organization Leadership<br><br>Supervises an assigned branch and its employees. The work directed involves high profile data science projects, programs, and/or initiatives within other federal agencies.Provides expert advice in the highly technical and specialized area of data science and is a key advisor to management on assigned/delegated matters related to the application of mathematics, statistical analysis, modeling/simulation, machine learning, natural language processing, and computer science from a data science perspective.Manages workforce operations, including recruitment, supervision, scheduling, development, and performance evaluations.Keeps up to date with data science developments in the private sector; seeks out best practices; and identifies and seizes opportunities for improvements in assigned data science program and project operations.<br><br><br>Senior Expert in Data Science<br><br>Recognized authority for scientific data analysis using advanc...</code> |
| <code>Time series analysis, production operations, condition-based monitoring</code> | <code>Experience in Production Operations or Well Engineering Strong scripting/programming skills (Python preferable)<br><br>Desired: <br><br> Strong time series surveillance background (eg. OSI PI, PI AF, Seeq) Strong scripting/programming skills (Python preferable) Strong communication and collaboration skills Working knowledge of machine learning application (eg. scikit-learn) Working knowledge of SQL and process historians Delivers positive results through realistic planning to accomplish goals Must be able to handle multiple concurrent tasks with an ability to prioritize and manage tasks effectively<br><br><br><br>Apex Systems is <br><br>Apex Systems is a world-class IT services company that serves thousands of clients across the globe. When you join Apex, you become part of a team that values innovation, collaboration, and continuous learning. We offer quality career resources, training, certifications, development opportunities, and a comprehensive benefits package. Our commitment to excellence is reflected in man...</code> | <code>Qualifications:· 3-5 years of experience as a hands-on analyst in an enterprise setting, leveraging Salesforce, Marketo, Dynamics, and similar tools.· Excellent written and verbal communication skills.· Experience with data enrichment processes and best practices.· Strong understanding of B2B sales & marketing for large, complex organizations.· Expertise in querying, manipulating, and analyzing data using SQL and/or similar languages.· Advanced Excel skills and experience with data platforms like Hadoop and Databricks.· Proven proficiency with a data visualization tool like Tableau or Power BI.· Strong attention to detail with data quality control and integration expertise.· Results-oriented, self-directed individual with multi-tasking, problem-solving, and independent learning abilities.· Understanding of CRM systems like Salesforce and Microsoft Dynamics.· Solid grasp of marketing practices, principles, KPIs, and data types.· Familiarity with logical data architecture and cloud data ...</code> |
| <code>Senior Data Analyst jobs with expertise in Power BI, NextGen EHR, and enterprise ETL.</code> | <code>requirements.Reporting and Dashboard Development: Design, develop, and maintain reports for the HRSA HCCN Grant and other assignments. Create and maintain complex dashboards using Microsoft Power BI.Infrastructure Oversight: Monitor and enhance the data warehouse, ensuring efficient data pipelines and timely completion of tasks.Process Improvements: Identify and implement internal process improvements, including automating manual processes and optimizing data delivery.Troubleshooting and Maintenance: Address data inconsistencies using knowledge of various database structures and workflow best practices, including NextGen EHR system.Collaboration and Mentorship: Collaborate with grant PHCs and analytic teams, mentor less senior analysts, and act as a project lead for specific deliverables.<br>Experience:Highly proficient in SQL and experienced with reporting packages.Enterprise ETL experience is a major plus!data visualization tools (e.g., Tableau, Power BI, Qualtrics).Azure, Azure Data Fa...</code> | <code>Qualifications<br><br>3 to 5 years of experience in exploratory data analysisStatistics Programming, data modeling, simulation, and mathematics Hands on working experience with Python, SQL, R, Hadoop, SAS, SPSS, Scala, AWSModel lifecycle executionTechnical writingData storytelling and technical presentation skillsResearch SkillsInterpersonal SkillsModel DevelopmentCommunicationCritical ThinkingCollaborate and Build RelationshipsInitiative with sound judgementTechnical (Big Data Analysis, Coding, Project Management, Technical Writing, etc.)Problem Solving (Responds as problems and issues are identified)Bachelor's Degree in Data Science, Statistics, Mathematics, Computers Science, Engineering, or degrees in similar quantitative fields<br><br><br>Desired Qualification(s)<br><br>Master's Degree in Data Science, Statistics, Mathematics, Computer Science, or Engineering<br><br><br>Hours: Monday - Friday, 8:00AM - 4:30PM<br><br>Locations: 820 Follin Lane, Vienna, VA 22180 | 5510 Heritage Oaks Drive, Pensacola, FL 32526 | 141 Se...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
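Expressed as code, these non-default values map onto `SentenceTransformerTrainingArguments` roughly as follows. This is a sketch only; `output_dir` is a hypothetical path.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="allmini-ai-embedding-similarity",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```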
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | ai-job-validation_cosine_accuracy | ai-job-test_cosine_accuracy |
|:-----:|:----:|:---------------------------------:|:---------------------------:|
| 0 | 0 | 0.9307 | - |
| 1.0 | 51 | 0.9703 | 0.9804 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CRAFT"
] | Non_BioNLP |
SEBIS/legal_t5_small_summ_es | SEBIS | text2text-generation | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"summarization Spanish model",
"dataset:jrc-acquis",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646 | 1,654 | 44 | 0 | ---
datasets:
- jrc-acquis
language: Spanish
tags:
- summarization Spanish model
widget:
- text: '[notificada con el número C(2006) 166] (El texto en lengua portuguesa es
el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto
el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE
del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector
veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando
lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral
ovina en Portugal. La aparición de esta enfermedad puede representar un grave
riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación
de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos
subvencionables que suponen para Portugal la adopción de medidas de urgencia contra
la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello,
el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa
a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre
catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado
varias decisiones para delimitar las zonas de protección y vigilancia y fijar
las condiciones que deben cumplir los animales que vayan a salir de esas zonas;
la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las
zonas de protección y vigilancia en relación con la fiebre catarral ovina y las
condiciones que se aplican a los traslados de animales desde estas zonas o a través
de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en
Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a
las posibilidades de alimentación animal, lo que ha conllevado costes adicionales
para los ganaderos. La situación tiene consecuencias particulares en Portugal,
pues las explotaciones especializadas en reproducción de bovinos y de ovinos están
ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados
de animales, mientras que las especializadas en engorde, que constituyen la salida
lógica de los animales criados en aquéllas, están localizadas fuera de dichas
zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas
para controlar la epidemia, como la realización de estudios epidemiológicos y
la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas
de laboratorio para el control serológico y virológico en el marco de las pruebas
realizadas a los animales antes de su traslado y en el de la vigilancia entomológica.
(6) Portugal y España han presentado pruebas de su cooperación para evitar la
propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De
conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del
Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola
común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas
comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación
y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse
de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8)
El pago de la contribución financiera de la Comunidad se supedita a la realización
efectiva de las acciones programadas y a la presentación por parte de las autoridades
de toda la información necesaria en los plazos establecidos. (9) El 25 de febrero
de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas
de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra
la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica
se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ
de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda
financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución
de la Comunidad, establecida sobre la base del gasto subvencionable calculado
para las medidas de vigilancia epidemiológica. Procede asimismo determinar los
importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas
utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han
cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas
con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las
medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente
de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN:
Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En
el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas
en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria
del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para
la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica,
incluida la adquisición de trampas. 2. El importe máximo de los gastos que se
reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en
el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR
por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica
(RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa.
3. El impuesto sobre el valor añadido se excluirá de la participación financiera
de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los
controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1,
de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte
de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará
a cabo previa presentación por parte de Portugal de justificantes de las pruebas
de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1,
apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La
ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo
a los siguientes elementos: a) una solicitud que contenga los datos especificados
en el anexo, presentada en el plazo establecido en el apartado 2 del presente
artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá
un informe epidemiológico y un informe financiero; c) el resultado de cualquiera
de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado
1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán
estar disponibles para los controles in situ mencionados en la letra c). 2. La
solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico
en un plazo de 60 días naturales a partir de la fecha de notificación de la presente
Decisión. Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá
un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la
presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero
de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224
de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE)
no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3]
DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión
2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103.
-------------------------------------------------- ANEXO Datos mencionados en
el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número
| Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas
| | | Trampas | | | Total | | -------------------------------------------------- '
---
# legal_t5_small_summ_es model
Model for summarization of legal text written in Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on parallel corpora from JRC-Acquis.
## Model description
legal_t5_small_summ_es is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model that scales the baseline T5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.
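For illustration, these dimensions correspond to a `T5Config` along the following lines; this is a sketch of the quoted sizes, not the original training configuration.

```python
from transformers import T5Config

# Mirrors the dimensions quoted above: d_model=512, d_ff=2048, 8 heads, 6+6 layers.
config = T5Config(
    d_model=512,
    d_ff=2048,
    num_heads=8,
    num_layers=6,          # encoder layers
    num_decoder_layers=6,  # decoder layers
)
```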
## Intended uses & limitations
The model could be used for summarization of legal texts written in Spanish.
### How to use
Here is how to use this model to summarize legal text written in Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

# Note: AutoModelWithLMHead is deprecated in newer transformers releases;
# AutoModelForSeq2SeqLM works as a drop-in replacement for this T5-based model.
pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,  # first GPU; use device=-1 to run on CPU
)
es_text = "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. 
(9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. 
Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_summ_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It uses the standard encoder-decoder architecture, and the optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
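A minimal sketch of how such an AdaFactor setup with an inverse square root (relative-step) schedule can be expressed with the `transformers` implementation is shown below; this is illustrative only and is not the original TPU training code.

```python
from transformers import T5ForConditionalGeneration
from transformers.optimization import Adafactor, AdafactorSchedule

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# relative_step=True uses Adafactor's built-in inverse square root schedule.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)
```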
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (of all possible language pairs) to build the vocabulary (with byte pair encoding), which is used with this model.
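A vocabulary of this kind can be built with SentencePiece; the sketch below assumes the unigram variant and uses hypothetical file names and an assumed vocabulary size.

```python
import sentencepiece as spm

# Hypothetical corpus path and output prefix; vocab_size is an assumption.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",
    model_prefix="legal_t5_small",
    model_type="unigram",
    vocab_size=32000,
)

sp = spm.SentencePieceProcessor(model_file="legal_t5_small.model")
print(sp.encode("El texto en lengua portuguesa es el único auténtico", out_type=str))
```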
### Pretraining
## Evaluation results
When the model is evaluated on the summarization test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_es | 80.23 | 70.16 | 78.69 |
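Scores of this kind can be computed with the `evaluate` library's ROUGE implementation (which requires the `rouge_score` package); the texts in the sketch below are placeholders, not examples from the test set.

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina"],
    references=["Decisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal"],
)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```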
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
| [
"TRANSLATION",
"SUMMARIZATION"
] | [
"PCR"
] | Non_BioNLP |
SNOWTEAM/sft_medico-mistral | SNOWTEAM | text-generation | [
"transformers",
"pytorch",
"mixtral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,719 | 1,721 | 17 | 1 | ---
title: Model Card for SNOWTEAM/sft_medico-mistral
summary: A specialized language model for medical applications, refined through instruction
tuning.
---
# Model Card for SNOWTEAM/sft_medico-mistral
## Overview
SNOWTEAM/sft_medico-mistral is a specialized language model designed for medical applications, further refined through instruction tuning to enhance its ability to respond to various medical-related instructions. This tuning leverages the embedded medical knowledge within the Medico-mistral model, focusing on medical consulting conversations, medical rationale QA, and medical knowledge graph prompting.
## Model Description
**Base Model:** Medico-mistral
**Model type:** Transformer-based decoder-only language model
**Language(s) (NLP):** English
## How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer  # these classes are used below but were not imported

model_path = "SNOWTEAM/sft_medico-mistral"

# Load the model across the available GPUs; float16 keeps memory usage manageable.
# (Optionally pass max_memory={...} to cap the memory used per device.)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

input_text = ""  # put your medical instruction or question here
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids=input_ids.cuda(),
    max_new_tokens=300,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (everything after the prompt).
output_text = tokenizer.batch_decode(output_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(output_text)
```
### Instruction Tuning Datasets
The open-source instruction tuning data is composed of 4 main parts (some datasets are taken from [https://huggingface.co/datasets/axiong/pmc_llama_instructions](https://huggingface.co/datasets/axiong/pmc_llama_instructions)):
1. **Medical Question Answering Data:**
To enhance the model's capability to reason about professional medical knowledge, we begin with training datasets from open-source medical multiple-choice question-answering sources such as USMLE (Jin, Pan et al., 2021), PubMedQA (Jin et al., 2019), and MedMCQA (Pal, Umapathi et al., 2022). While these datasets inherently require specific medical knowledge, they primarily feature simple choices and lack comprehensive reasoning guidance. To address this, the PMC-LLaMA authors employed ChatGPT for causal analysis (Chaoyi et al. 2023). Specifically, ChatGPT is queried to generate reasoning outputs for each QA pair, which are then structured and used as explanations.
2. **Medical Knowledge Graph Data:**
We wish to utilize medical knowledge graphs such as UMLS (Lindberg, Humphreys, and McCray, 1993) to align with clinicians' expertise. Specifically, we focused on linking medical terms to their respective knowledge descriptions or correlations. For this purpose, we used the QA pair dataset of the Translation General Knowledge Graph created by PMC-LLaMA (Chaoyi et al. 2023). Medical knowledge graphs consist of two main types of information: entity descriptions and entity relationships. The dataset incorporates two different prompts that ask the model to provide a description of a specific entity or to predict the relationship between two entities; an illustrative sketch of these two prompt styles is given after this list.
3. **Single-turn Medical Dialogue Data:**
In patient-doctor conversations, patients often describe their symptoms in a colloquial and brief manner. When synthetic patient-doctor conversation datasets are created manually, they tend to lack diversity and become overly specialized, making them less reflective of real-life scenarios. A more effective approach is to collect real patient-doctor conversations. The *HealthCareMagic-100k* dataset addresses this by gathering approximately 100,000 genuine doctor-patient interactions from online medical advice websites. These conversations were filtered manually and automatically to remove identifiers and were corrected for grammatical errors using a language tool. Additionally, around 10,000 conversations from the online medical advice website iCliniq were collected, and 5k conversations between patients and doctors were generated via ChatGPT by ChatDoctor for supplementation and evaluation (Yunxiang et al. 2023).
4. **Multi-turn Medical Dialogue Data (this data was not used to train this version of the model):**
The only model currently trained using a multi-round dialogue dataset is the Zhongjing-LLaMA model (Songhua et al. 2023). This model uses the CMtMedQA dataset, which is the first large-scale multi-round TCM QA dataset suitable for LLM training and can significantly enhance a model's multi-round QA capability. However, this dataset collects data from online QA conversations and lacks the ability to understand pathology or imaging examination results, which is a limitation in real clinical QA situations. Therefore, we used real electronic medical records (EMRs) obtained from hospitals, rewritten into multi-round conversations by prompting GPT. When designing the prompts, in order to standardize the questioning process and improve differential diagnosis accuracy, we referred to the Mini-CEX, a clinical questioning assessment index used in medical schools, and the LLM-Mini-CEX, a new criterion modified specifically for large language models (Xiaoming et al. 2023).
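For illustration, the two knowledge-graph prompt styles mentioned in point 2 could look like the sketch below; the exact wording and the example concepts are assumptions, not excerpts from the released dataset.

```python
# Hypothetical prompt templates for the two knowledge-graph tasks.
entity_description_prompt = (
    "Instruction: Provide a description of the following medical entity.\n"
    "Entity: Myocardial infarction\n"
    "Response:"
)
entity_relationship_prompt = (
    "Instruction: State the relationship between the two medical entities below.\n"
    "Entity A: Metformin\n"
    "Entity B: Type 2 diabetes mellitus\n"
    "Response:"
)
```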
### Medical-Specific Instruction Tuning
By combining the above three parts, we form a large-scale, high-quality, medical-specific instruction tuning dataset. We further tune Medico-mistral on this dataset, resulting in sft_medico-mistral.
## Training Details
Our model is based on Mixtral-8x7B-v0.1-Instruct, a generic English LLM with 13 billion active parameters. Training was performed on 8 A100-80G GPUs via parallelization. We first inject knowledge into the base model to optimize the autoregressive loss. During training, we set the maximum context length to 4096 and the batch size to 1024. The model was trained using the AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate of 2e-5. We employed a fully sharded data parallel (FSDP) acceleration strategy, the bf16 (brain floating-point) data format, and gradient checkpointing (Chen et al. 2016). The model was trained using 8 A100 GPUs for 1 epoch of knowledge injection. Afterwards, we used 7 A100 GPUs to perform 5 epochs of healthcare-specific instruction tuning in the SFT phase with a batch size of 896. During the instruction tuning phase, all sequences are processed in each epoch.
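A minimal sketch of `TrainingArguments` matching this description (FSDP full sharding, bf16, gradient checkpointing) is given below. The output path, per-device batch size, and accumulation steps are assumptions chosen so that 7 GPUs × 1 × 128 = 896; this is not the authors' original configuration.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="sft_medico-mistral",      # hypothetical path
    per_device_train_batch_size=1,        # assumed
    gradient_accumulation_steps=128,      # assumed: 7 GPUs x 1 x 128 = 896 effective batch
    learning_rate=2e-5,
    num_train_epochs=5,
    bf16=True,                            # brain floating-point format
    gradient_checkpointing=True,
    fsdp="full_shard auto_wrap",          # fully sharded data parallel
    optim="adamw_torch",
)
```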
### Training Data
The training data combines diverse datasets from medical consultations, rationale QA, and knowledge graphs to ensure comprehensive medical knowledge coverage and reasoning ability.
### Result

## Model Sources
**Repository:** [https://huggingface.co/SNOWTEAM/sft_medico-mistral](https://huggingface.co/SNOWTEAM/sft_medico-mistral)
**Paper [optional]:**
**Demo [optional]:**
| [
"TRANSLATION"
] | [
"PUBMEDQA"
] | BioNLP |
m7n/discipline-bert-modern-large_v02 | m7n | sentence-similarity | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:7828",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,735 | 1,735 | 7 | 0 | ---
base_model: answerdotai/ModernBERT-large
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:7828
- loss:TripletLoss
widget:
- source_sentence: Pleural effusion is a frequently observed lesion in the course
of respiratory diseases such as inflammatory process and cancer metastasis. Its
cause may be either tuberculosis (the most common extrapulmonary location is the
pleura) and malignant disease of the pleura. Confirmation of tuberculosis is often
troublesome. The primary site of cancer may be als difficult to find despite the
application of difficult diagnostic methods. Below we present history of -year
old female in whom carcinomatous cells and positive result of PCR for Mycobacterium
tuberculosis in pleural fluid were discovered simultaneously suggesting the tuberculosis
and cancer of unknown primary origin.
sentences:
- Coronaviruses are a large family of viruses that cause illness ranging from mild
to severe symptoms. Coronaviruses are known to cause diseases that cause severe
symptoms such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory
Syndrome (SARS). This study aims to determine the factors related to compliance
with the use of personal protective equipment by health workers during the COVID-
pandemic at Bahteramas Hospital, Southeast Sulawesi province in . This study used
a case control design. The population in this study were health workers at Bahtermas
Hospital totaling health workers. The sample in this study amounted to respondents
consisting of groups of health workers. Sampling using the Lemeshow formula. The
results showed that based on the results of the chis square test, the P-value
of the knowledge variable was , the Attitude variable, a P-Value of was obtained
and the PPE availability variable was a P-Value of . From the research samples
used, it can be concluded that the Knowledge, Attitude and availability of PPE
are related to compliance with the use of PPE by health workes during the COVID-
pandemic at Bahteramas Hospital, Southeast Sulawesi Province.
- Recent developments in treatment have steadily raised the median predicted age
of survival for people with Cystic Fibrosis (CF). We report the health-related
quality of life (HRQoL) in CF adult patients and correlate our findings with the
patients' demographic characteristics.The Cystic Fibrosis Quality of Life (CFQoL)
questionnaire was answered by CF adult patients. The questionnaire included questions
pertaining to age, sex and level of education and covered eight sections of functioning.The
highest score was reported in the "Social Functioning" section, while the lowest
in the "Concerns for the Future" section. When different age groups were compared,
statistical significances were reported in "Physical Functioning", "Interpersonal
Relationships", and the "Career Concerns" section, with older patients reporting
statistically higher HRQoL scores than younger ones (p < ). No statistically significant
difference was reported amongst the scoring between male and female CF patients.
When different educational levels were compared, patients that had received a
higher educational training scored statistically higher in all but one sections
of the questionnaire when compared with patients of a lower educational level
(p < ).More than half Greek adult CF patients report that they are capable to
participate in social activities but most of them are worried about the outcome
of their disease and its effect on their lives.
- 'BACKGROUND: The global amount of investment in companies developing artificial
intelligence (AI)-based software technologies for medical diagnostics reached
million in , rose to million in , and is expected to continue growing. While software
manufacturing companies should comply with existing clinical, bioethical, legal,
and methodological frameworks and standards, there is a lack of uniform national
and international standards and protocols for testing and monitoring AI-based
software. AIM: This objective of this study is to develop a universal methodology
for testing and monitoring AI-based software for medical diagnostics, with the
aim of improving its quality and implementing its integration into practical healthcare.
MATERIALS AND METHODS: The research process involved an analytical phase in which
a literature review was conducted on the PubMed and eLibrary databases. The practical
stage included the approbation of the developed methodology within the framework
of an experiment focused on the use of innovative technologies in the field of
computer vision to analyze medical images and further application in the health
care system of the city of Moscow. RESULTS: A methodology for testing and monitoring
AI-based software for medical diagnostics has been developed, aimed at improving
its quality and introducing it into practical healthcare. The methodology consists
of seven stages: self-testing, functional testing, calibration testing, technological
monitoring, clinical monitoring, feedback, and refinement. CONCLUSION: Distinctive
features of the methodology include its cyclical stages of monitoring and software
development, leading to continuous improvement of its quality, the presence of
detailed requirements for the results of the software work, and the participation
of doctors in software evaluation. The methodology will allow software developers
to achieve significant outcomes and demonstrate achievements across various areas.
It also empowers users to make informed and confident choices among software options
that have passed an independent and comprehensive quality check.'
- source_sentence: Abstract The molecule based bilayer system composed of hard Ni
[Fe(CN) ] nH O and soft Ni [Cr(CN) ] nH O ferromagnetic Prussian blue analogues
has been fabricated on a solid substrate by "layer by layer" deposition. The structure
and morphology characterization as well as results of magnetic measurements are
described. The thickness of the bilayer is ca. nm including a nm interface. This
bilayer system shows anisotropic magnetic properties reflected in the shape of
magnetic hysteresis measured in various film orientation with respect to the direction
of external magnetic field. There is no exchange interaction between hard and
soft magnetic layer and irrespective of bilayer orientation, the magnetization
and demagnetization process of both Ni [Fe(CN) ] nH O and Ni [Cr(CN) ] nH O layers
occurs independently.
sentences:
- 'Previous articleNext article No AccessBook ReviewsThe Gospel According to Renan:
Reading, Writing, and Religion in Nineteenth-Century France. By Robert D. Priest.
Oxford Historical Monographs. Edited by P. Clavin et al.Oxford: Oxford University
Press, . Pp. xii+ . . La vie de Jesus de Renan: La fabrique d''un best-seller.
By Nathalie Richard.Rennes: Presses Universitaires de Rennes, . Pp. . .Stephane
GersonStephane GersonNew York University Search for more articles by this author
PDFPDF PLUSFull Text Add to favoritesDownload CitationTrack CitationsPermissionsReprints
Share onFacebookTwitterLinkedInRedditEmail SectionsMoreDetailsFiguresReferencesCited
by The Journal of Modern History Volume , Number 0September Article DOIhttps://doi.org/
. Views: 00Total views on this site For permission to reuse, please contact [email
protected]PDF download Crossref reports no articles citing this article.'
- Purpose The purpose of this paper is to present a case study describing a collaboration
with Last Mile Health, a non-governmental organization, to develop a framework
to inform its community healthcare networks in remote Liberia. Design/methodology/approach
The authors detail the process of using the unique problem setting and available
data to inform modeling and solution approaches. Findings The authors show how
the characteristics of the Liberian setting can be used to develop a two-tier
modeling framework. Given the operating constraints and remote setting the authors
are able to model the problem as a special case of the location-routing problem
that is computationally simple to solve. The results of the models applied to
three districts of Liberia are discussed, as well as the collaborative process
of the multidisciplinary team. Originality/value Importantly, the authors describe
how the problem setting can enable the development of a properly scoped model
that is implementable in practice. Thus the authors provide a case study that
bridges the gap between theory and practice.
- Abstract Poor electrical conductivities, structural instabilities and long synthesis
procedures, limit the application of metal organic frameworks (MOFs) in energy
storage systems. In the present work, we synthesize a cobaltbenzene tricarboxylic
acid based MOF (CoBTC MOF) via two different approaches i. e. solvothermal route
and mechanochemical grinding for its utility in energy storage. When characterized
structurally and electrochemically, the CoBTC MOF synthesized by mechanochemical
method is found to be superior because of large surface area, enhanced porosity/diffusion
process through MOF and structural robustness along with less time requirement.
Further, its hybrid composite with graphene nanosheets (CoBTC MOF/GNS) was prepared
for its performance as a supercapacitor material. The characterization reveals
the formation of sandwich structure where CoBTC MOF rods (thickness ranging from
to m) are placed in between GNS. This arrangement has resulted into high specific
capacitance of F.g at current density of A.g in M KOH electrolyte along with excellent
capacitance retention up to % after charge/discharge cycles. Also, a symmetric
supercapacitor has been assembled for practical application of CoBTC MOF/GNS which
demonstrates specific capacitance of F.g with high energy density and power density
of Wh.kg and W.kg respectively, along with % retention of initial capacitance
after chargedischarge cycles.
- source_sentence: Patients with cancer are at increased risk of venous thromboembolism
(VTE). Risk assessment models can help identifying high-risk populations that
might benefit from primary thromboprophylaxis. Currently, the Khorana score is
suggested to select patients for primary thromboprophylaxis. However, risk stratification
with the Khorana-score remains imperfect, which led to the development of subsequent
clinical risk assessment models (PROTECHT-, CONKO-, ONKOTEV-, TiCat-, COMPASS-CAT-score).
Further, recently, a simplified, personalized risk prediction tool for cancer-associated
VTE, incorporating cancer type and D-Dimer levels has been proposed by Pabinger
et al. (CATSCORE). Also, novel models have been designed specifically for specific
tumour types, such as lung cancer (ROADMAP-CAT), gynaecological cancer (THROMBOGYN),
lymphoma (THROLY), or multiple myeloma (SAVED-; IMPEDE VTE-score). In the present
narrative review, we comprehensively summarize available data on currently available
risk assessment models for VTE in patients with cancer, provide a critical discussion
on their clinical utility, and give an outlook towards future developments.
sentences:
- Besides the cancer itself, venous thromboembolism (VTE) is the leading cause of
death in cancer patients receiving outpatient chemotherapy (CT). Data on VTE development
and impact on treatment course and outcome in real-life NSCLC patients receiving
immune check-point inhibitors (ICI) is currently sparse. More knowledge within
this area is warranted due to the emerging use of ICI in clinical practice. To
quantify risk of VTE and recurrent VTE in NSCLC patients receiving ICI. Explore
the clinical impact of VTE on ICI course and survival and explore potential risk
factors for VTE. Patients with advanced/metastatic NSCLC treated with an immune
checkpoint inhibitor (ICI) at the University Hospital of Odense, Denmark during
were identified and data gathered retrospectively from electronic medical records
(n = ). All patients had finished ICI at the time of data-cut off. Baseline Khorana
Score (KRS) was calculated within one week prior to ICI initiation. Based on follow-up
data cumulative incidence of VTE and its impact on outcome and survival was performed
using Kaplan Meier and cox-regression hazard estimation. Risk of VTE was % during
ICI and % at any time point after ICI initiation. Cumulative incidence rates of
VTE at , , and months after first ICI was %, %, % and % respectively. Median time
to VTE during ICI was months [IQR .0]. Having VTE during ICI lead to discontinuation
of ICI in % of cases, most due to fatal PE. History of VTE before onset of ICI
was a significant risk factor for recurrent VTE during ICI ( % within this subgroup)
despite use of anticoagulant therapy. The incidence and impact of VTE during ICI
for real-life NSCLC patients is not negligible with almost % developing VTE leading
to termination of further ICI in the majority of cases - many due to fatal PE.
The risk of recurrent anticoagulant resistant VTE in patients with known VTE during
ICI is also considerable, which calls for better management and prevention of
VTE including development of treatment specific VTE risk assessment models.
- In September , the New York Supreme Court, Second District, reversed a decision
made by the Division of Human Rights for a dentist to pay a patient in compensatory
damages. The agency ruled that disability-based discrimination is prohibited in
places of public accommodation. The state Supreme Court, however, found that dental
offices are not places of public accommodation as defined by the state human rights
law. The Division of Human Rights plans to appeal the ruling to the New York Court
of Appeals, citing case law which supports the proposition that private medical
offices are places of public accommodation.
- Hemoglobin concentrations in endometriotic cyst fluids have been found to be associated
with distinct clinical manifestations, such as pelvic pain and infertility, as
well as with malignant transformation. However, the measurement of the hemoglobin
concentration in cyst fluid is an invasive procedure. The present study aimed
to evaluate the usefulness of visible and nearinfrared interactance spectroscopy
as a noninvasive technique for estimating the hemoglobin concentration in endometriotic
cystic fluid. Optical fibers were directly placed onto sliced raw pork (up to
00mmthick as an anatomical barrier on the cyst's surface) that covers a cuvette
containing hemoglobin solution or endometriotic cyst fluid. Partial least square
regression based on the second derivative using visible and nearinfrared interactance
spectroscopy (wavelength region, nm) was used to estimate the hemoglobin concentration.
The samples were categorized into the evaluation sets (i.e., calibration set)
to create calibration curves and test sets (i.e., validation set) to validate
equations. The cyst fluid at mm of pork thickness achieved a high correlation
between actual and predicted hemoglobin concentrations (calibration (R0= ) and
validation (R0= ) data). However, the correlation slightly decreased at 00mm pork
thickness (i.e., calibration (R0= ) and validation (R0= ) data). Interactance
spectroscopy may thus be a noninvasive tool which can be used to estimate the
hemoglobin concentration in endometriotic cyst fluid when the anatomical barrier
is mm. This technology is a reliable modality for predicting the severity of dysmenorrhea
and infertility, as well as malignant transformation, in a number of patients
with endometriotic cysts. Such quantitative optical spectroscopic imaging technologies
may enable the accurate diagnosis of the pathological processes in endometriotic
cysts in clinical practice.
- source_sentence: Numerous industries provide investors with various funding options
in today's rapidly evolving business and technology landscape. One particularly
intriguing area in this regard is investment. Investment refers to allocating
cash into various assets for a specific duration to generate profits, such as
income or capital appreciation. Infrastructure development has led to the management
of several industries, including property and real estate. Property can stimulate
other economic sectors by providing employment opportunities and enhancing overall
societal well-being. This is further bolstered by the rapid growth of the property
sector, driven by the consistent availability of land and the rising public demand
for housing and office spaces. Based on the data results, it is evident that there
was an upsurge in demand for property and real estate in . In contrast, production
was sluggish expansion across all industries during the Covid- pandemic. Share
prices will rise with increased demand and fall with less demand. This is evident
in the company's effective management of shareholders. Financial reports are crucial
for the company's future. Financial report data can be utilized as a decisive
factor in decision-making. By assessing the financial performance of PT. Alam
Sutera Reality Tbk, PT. Bumi Serpong Tbk, and PT. Bekasi Fajar Industri Estate
Tbk, investors can make well-informed investment decisions. The liquid or illiquid
ratio, which is based on the company's debt-to-equity ratio, current ratio, net
profit margin, and total asset turnover, can be calculated to complete this assessment.
sentences:
- Fraud in accounting reporting is one of the factors that need to consider in presenting
quality financial reports. Based on the existing phenomena, this study investigates
accounting fraud that is suspected to be influenced by Good Corporate Governance
(GCG), compliance with accounting rules to present financial reports and information
asymmetry, and internal control. Testing the hypotheses secondary data from BUMN
listed on the Jakarta Stock Exchange is used to test the allegations. Testing
the hypothesis proposed using a quantitative approach with a sample of BUMNs listed
on the Jakarta Stock Exchange. The calculation results show that all the proposed
hypotheses are empirically proven. This condition indicates that accounting fraud
to be influenced by Good Corporate Governance (GCG), adherence to accounting rules
for the presentation of financial statements and information asymmetry, and internal
control.
- The number of multilingual signs in Japan was increasing rapidly; however, there
were still disputes over the information of signs, such as low recognition of
information and language selection, etc. In this case, this study was carried
out.BR The purpose of the study was to define benchmarks for foreigner-friendly
multilingual signs. Moreover, the possibility of how Chinese information was marked
in the multilingual signs of Japanese Tourist Attractions was explored.BR The
research contents and results were as follows. Firstly, the representative tourist
attractions in Tokyo were surveyed on the spot and photographed for record. Secondly,
the data from the fieldwork were organized into charts and graphs and analyzed
for multilingual markers. Thirdly, through interviews with H Tourism Association
in Tokyo, some issues with the signs of the current situation of scenic spots
were revealed. Fourthly, from the perspective of the characteristics of Chinese
language and the thinking method about Chinese characters, the field surveys and
interviews about the need for a large area of multilingual information marking
in signs were analyzed. The possibility of marking Chinese messages in signs of
Tourist Attractions in Japan was discussed.BR Guidance signs and induction signs
were more informative, and the information was generally presented in words rather
than sentences. If adopted together with non-verbal communication such as map
and diagram, the Chinese characters in the guidance signs and induction signs
of historical scenic spots with a high proportion of Chinese characters could
be omitted.BR So far, there have been many studies on the issue of multilingual
signs from the perspective of fonts and layout. What's more, from this new perspective
on language features, the issue of multilingual signs was explored in this study.
It was expected that the results of this research can be applied into practice
in practical projects.
- Dialect Recognition Systems (DRS) are systems that group dialects, according to
similar acoustic features found in dialect regions. The speaker's age, gender,
and dialect characteristics negatively affect the performance of speech recognition
systems. To handle dialect differences, dialect recognition systems can be integrated
into speech recognition systems. By determining the spoken dialect, the system
can be switched to the corresponding speech recognition model. There is no dataset
that can be used for Turkish automatic dialect recognition systems. In this study,
it is thought that this deficiency should be eliminated in some way. In addition,
an experimental study has been carried out to classify the generated data set
by convolutional neural networks. The resulting % accuracy is satisfactory.
- source_sentence: The social sciences have long shown that health is not born of
pure biology, empirically (re)centred the social and material causes of disease,
and affirmed the subjective experiences of disease. Disputed both in popular and
academic discourses, social health has variously attempted to stress the social
aspects of health. Existing conceptions remain analytically limited as they are
predominantly used as descriptors for populational health. This article theorises
social health as an analytical lens for making sense of the relations, affects
and events where health unfolds and comes into expression. Drawing on social practice
theory, feminist care ethics and posthumanism this conceptual paper re-imagines
how social health might be conceived as lived social practices anchored in care.
Care within our framework acknowledges the unavoidable interdependency foundational
to the existence of beings and stresses the 'know how' and embodied practices
of care in the mundane in order to emphasise that care itself is absolutely integral
to the maintenance of social health. The article argues that health needs to be
understood as a verb intrinsically (re)made in and through social contexts and
structures and comprised of meaningful, human-human and human-non-human interactions.
Ultimately, in theorising social health through mundane care practices, we hope
to open up research to making sense of how the doing of health unfolds inside
often banal, patterned forms of social activity. Such taken-for-granted social
practices exemplify the often overlooked lived realities that comprise our health.
To understand health in its own right, we argue, these everyday practices need
to be interrogated.
sentences:
- This paper proposes a methodology to create an interpretable fuzzy model for monthly
rainfall time series prediction. The proposed methodology incorporates the advantages
of artificial neural network, fuzzy logic and genetic algorithm. In the first
step, the differences between the time series data are calculated and they are
used to define the interval between the membership functions of a Mamdani-type
fuzzy inference system. Next, artificial neural network is used to develop the
model from input-output data and the established model is then used to extract
the fuzzy rules. The parameters of the created fuzzy model are then optimized
by using genetic algorithm. The proposed model was applied to eight monthly rainfall
time series data in the northeast region of Thailand. The experimental results
showed that the proposed model provided satisfactory prediction accuracy when
compared to other commonly-used prediction models. Due to the interpretability
nature of the model, human analysts can gain insight knowledge of the data to
be modeled.
- A dB dynamic range and cm spatial resolution tunable photon-counting optical time-domain
reflectometer (PC-OTDR) is presented along with a Field Programmable Gate Array
(FPGA)-based detection management system that allows several regions of the fiber
to be interrogated by the same optical pulse, increasing the data acquisition
rate when compared to previous solutions. The optical pulse generation is implemented
by a tunable figure- passive mode-locked laser providing pulses with the desired
bandwidth and center wavelength for WDM applications in the C-band. The acquisition
rate is limited by the afterpulse effect and dead time of the employed gated avalanche
single-photon detectors. The devised acquisition system not only allows for centimeter-resolution
monitoring of fiber links as long as km in under minutes but is also readily adapted
to any other photon-counting strategy for increased acquisition rate. The system
provides a -fold decrease in acquisition times when compared with state-of-the-art
solutions, allowing affordable times in centimeter-resolution long-distance fiber
measurements.
- Care has been theorised in relationship to eating disorders as a central consideration
across diagnoses. In the context of avoidant restrictive food intake disorder
(ARFID) specifically, there is room to further develop the nuances around layers
of care involved in working towards well-being. In this paper, we engage with
the stories of caregivers of people with ARFID, exploring their pathways to care
(or lack thereof) through the healthcare system in Aotearoa New Zealand. We explore
the material, affective and relational aspects of care and care-seeking, engaging
with the power and politics of care as it flows through care-seeking assemblages.
Using postqualitative methods of analysis, we discuss how while participants were
seeking care, they received (or, at times, did not receive) treatment, and unpack
how care and treatment are not always synonymous. We work up extracts from parents'
stories surrounding their caring for their children and how their actions were,
at times, interpreted in ways that made them feel blame and shame rather than
care. Participants' stories also offer glimmers of care within a resource-strapped
healthcare system, which invite us to consider the potentiality of a relational
ethics of care as an assemblage-shifting moment.
model-index:
- name: SentenceTransformer based on answerdotai/ModernBERT-large
results:
- task:
type: triplet
name: Triplet
dataset:
name: modernBERT
type: modernBERT
metrics:
- type: cosine_accuracy
value: 0.9846547314578005
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: modernBERT disciplines
type: modernBERT_disciplines
metrics:
- type: cosine_accuracy
value: 0.9789272030651341
name: Cosine Accuracy
---
# SentenceTransformer based on answerdotai/ModernBERT-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) <!-- at revision e829787a68677321312ff287fda2f8ef1a36e02a -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
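The sequence length and output dimensionality listed above can be read directly off the loaded model. A quick sanity check, assuming the model id under which this card is published:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("m7n/discipline-bert-modern-large_v02")

# Both values should match the architecture summary above.
print(model.max_seq_length)                      # 8192
print(model.get_sentence_embedding_dimension())  # 1024
```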
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("m7n/discipline-bert-modern-large_v02")
# Run inference
sentences = [
"The social sciences have long shown that health is not born of pure biology, empirically (re)centred the social and material causes of disease, and affirmed the subjective experiences of disease. Disputed both in popular and academic discourses, social health has variously attempted to stress the social aspects of health. Existing conceptions remain analytically limited as they are predominantly used as descriptors for populational health. This article theorises social health as an analytical lens for making sense of the relations, affects and events where health unfolds and comes into expression. Drawing on social practice theory, feminist care ethics and posthumanism this conceptual paper re-imagines how social health might be conceived as lived social practices anchored in care. Care within our framework acknowledges the unavoidable interdependency foundational to the existence of beings and stresses the 'know how' and embodied practices of care in the mundane in order to emphasise that care itself is absolutely integral to the maintenance of social health. The article argues that health needs to be understood as a verb intrinsically (re)made in and through social contexts and structures and comprised of meaningful, human-human and human-non-human interactions. Ultimately, in theorising social health through mundane care practices, we hope to open up research to making sense of how the doing of health unfolds inside often banal, patterned forms of social activity. Such taken-for-granted social practices exemplify the often overlooked lived realities that comprise our health. To understand health in its own right, we argue, these everyday practices need to be interrogated.",
"Care has been theorised in relationship to eating disorders as a central consideration across diagnoses. In the context of avoidant restrictive food intake disorder (ARFID) specifically, there is room to further develop the nuances around layers of care involved in working towards well-being. In this paper, we engage with the stories of caregivers of people with ARFID, exploring their pathways to care (or lack thereof) through the healthcare system in Aotearoa New Zealand. We explore the material, affective and relational aspects of care and care-seeking, engaging with the power and politics of care as it flows through care-seeking assemblages. Using postqualitative methods of analysis, we discuss how while participants were seeking care, they received (or, at times, did not receive) treatment, and unpack how care and treatment are not always synonymous. We work up extracts from parents' stories surrounding their caring for their children and how their actions were, at times, interpreted in ways that made them feel blame and shame rather than care. Participants' stories also offer glimmers of care within a resource-strapped healthcare system, which invite us to consider the potentiality of a relational ethics of care as an assemblage-shifting moment.",
'A dB dynamic range and cm spatial resolution tunable photon-counting optical time-domain reflectometer (PC-OTDR) is presented along with a Field Programmable Gate Array (FPGA)-based detection management system that allows several regions of the fiber to be interrogated by the same optical pulse, increasing the data acquisition rate when compared to previous solutions. The optical pulse generation is implemented by a tunable figure- passive mode-locked laser providing pulses with the desired bandwidth and center wavelength for WDM applications in the C-band. The acquisition rate is limited by the afterpulse effect and dead time of the employed gated avalanche single-photon detectors. The devised acquisition system not only allows for centimeter-resolution monitoring of fiber links as long as km in under minutes but is also readily adapted to any other photon-counting strategy for increased acquisition rate. The system provides a -fold decrease in acquisition times when compared with state-of-the-art solutions, allowing affordable times in centimeter-resolution long-distance fiber measurements.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
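The same two calls extend naturally to lightweight semantic search over a small set of abstracts. The sketch below ranks a hypothetical mini-corpus against a query; the corpus strings and the query are illustrative placeholders, not part of the training data.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("m7n/discipline-bert-modern-large_v02")

# Hypothetical mini-corpus of abstracts; replace with your own documents.
corpus = [
    "Risk assessment models for venous thromboembolism in cancer patients.",
    "Layer-by-layer deposition of Prussian blue analogue magnetic bilayers.",
    "Care practices and social health in everyday life.",
]
query = "Which abstract discusses thrombosis risk scores in oncology?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity between the query and every corpus entry: shape [1, len(corpus)]
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best])
```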
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `modernBERT` and `modernBERT_disciplines`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | modernBERT | modernBERT_disciplines |
|:--------------------|:-----------|:-----------------------|
| **cosine_accuracy** | **0.9847** | **0.9789** |
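These accuracies were produced by `TripletEvaluator` runs during training. Below is a minimal sketch of how the same evaluator can be applied to this model; the three one-element lists are placeholders, since the held-out evaluation triplets are not distributed with the model.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("m7n/discipline-bert-modern-large_v02")

# Placeholder triplets: an anchor, a topically related positive, and an unrelated negative.
anchors = ["Theorising social health as lived practices of care."]
positives = ["Caregivers' pathways to care for people with ARFID in Aotearoa New Zealand."]
negatives = ["A photon-counting optical time-domain reflectometer with centimeter resolution."]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="modernBERT",
)
print(evaluator(model))  # e.g. {'modernBERT_cosine_accuracy': ...}
```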
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 7,828 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 86 tokens</li><li>mean: 240.32 tokens</li><li>max: 633 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 243.66 tokens</li><li>max: 668 tokens</li></ul> | <ul><li>min: 88 tokens</li><li>mean: 237.15 tokens</li><li>max: 681 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Flash memory devices are investigated to confirm their application as physically unclonable functions (PUFs). Inherent fluctuations in the characteristics of flash memory devices, even with identical fabrication processes, produce different outputs, which are useful for device fingerprints. A difference in programming/erasing efficiency arises from a widely distributed threshold voltage. However, statistical fluctuations in the threshold voltage represent an advantage for PUF applications. The characteristics of PUFs, such as their unclonability, uncontrollability, unpredictability, and robustness, are investigated using fabricated flash memory devices. A simulation study is performed to support the experimental results and to show that the unpredictability is induced by variations in the gate dielectric thickness.</code> | <code>Ternary Content Addressable Memory (TCAM) is used in applications that require a low power dissipation and fast data retrieval. This paper presents a domain wall-based spintronic TCAM cell. The proposed design exploits the resistive behavior of this nonvolatile memory, reduces total power dissipation by reducing the voltage swing at the match line, and minimizes delay by employing a tiny sensing unit within each cell. Our experimental evaluation on nm technology for a -bit word-size TCAM at an V supply voltage and mV sense margin show that the delay is less than ps. The per-bit search energy is approximately fJ. Experimental evaluation on benchmark applications on the AMD Southern Islands GPU reveal that the GPU always dissipates less power when enhanced with the proposed TCAM design. Furthermore, the proposed method consumes at least % less energy when compared to state-of-the-art TCAM designs.</code> | <code>Abstract. The main focus of the paper is to present a flood and landslide early warning system, named HEWS (Hydrohazards Early Warning System), specifically developed for the Civil Protection Department of Sicily, based on the combined use of rainfall thresholds, soil moisture modelling and quantitative precipitation forecast (QPF). The warning system is referred to different Alert Zones in which Sicily has been divided into and based on a threshold system of three different increasing critical levels: ordinary, moderate and high. In this system, for early flood warning, a Soil Moisture Accounting (SMA) model provides daily soil moisture conditions, which allow to select a specific set of three rainfall thresholds, one for each critical level considered, to be used for issue the alert bulletin. Wetness indexes, representative of the soil moisture conditions of a catchment, are calculated using a simple, spatially-lumped rainfallstreamflow model, based on the SCS-CN method, and on the u...</code> |
| <code>A new method for the determination of trace levels of bromates by selective membrane collection is presented. Various membranes containing a few micrograms of different complexing reagents in a poly(vinyl chloride) matrix were tested. These membranes were produced on the surface of quartz glass (reflectors), and they were immersed in solutions containing bromate and bromide ions. At the first stage the prepared membranes collected both bromate and bromide ions, so different bromide masking agents were put in the analyzed solutions to avoid bromide collection. By the end of the equilibration time, the reflectors were left to dry, and they were analyzed by total reflection X-ray fluorescence (TXRF). The poly(vinyl chloride) with aliquat- membrane and o-dianisidin complexing agent gave the best results. The minimum detection limit was equal to ng/mL for ultrapure water and ng/mL for drinking water.</code> | <code>ADVERTISEMENT RETURN TO ISSUEPREVArticleNEXTVoltammetric anion responsive sensors based on modulation of ion permeability through Langmuir-Blodgett films containing synthetic anion receptorsShinobu. Nagase, Masamitsu. Kataoka, Ryuichi. Naganawa, Ryoko. Komatsu, Kazunori. Odashima, and Yoshio. UmezawaCite this: Anal. Chem. , , , 00000000Publication Date (Print):July , 0000Publication History Published online0 May 0000Published inissue July 0000https://pubs.acs.org/doi/ /ac00000a000https://doi.org/ /ac00000a000research-articleACS PublicationsRequest reuse permissionsArticle Views000Altmetric-Citations00LEARN ABOUT THESE METRICSArticle Views are the COUNTER-compliant sum of full text article downloads since November (both PDF and HTML) across all institutions and individuals. These metrics are regularly updated to reflect usage leading up to the last few days.Citations are the number of other articles citing this article, calculated by Crossref and updated daily. Find more information abo...</code> | <code>This study investigated whether performance of an interceptive skill requires an intact visual-perception-action cycle. Eleven skilled male Australian rules football athletes (M age = , SD = ) were recruited from an elite developmental pathway squad for a within-subject study. Participants were required to kick a ball directly at a goal from a -meter distance while wearing a pair of stroboscopic glasses. The glasses were used to create four vision conditions. Condition one kept intact the visual-perception-action cycle with uninterrupted vision of the motor skill. Three other conditions included stroboscopic vision that presented temporal samples of vision, which interrupted the perception-action cycle through progressive increases to intermittent vision occlusion of the motor skill. Goal kick error of ball position relative to a central target line within the goal and number of successful goals kicked were measured. Written report of internal and external focus of attention was also m...</code> |
| <code>The study aimed to determine the effectiveness of Contextual Teaching and Learning (CTL) in reducing and improving learning outcomes and math anxiety among students at a private elementary school in Indonesia. The research utilized a one-group control pre-posttest design with a sample of 0th-grade students. The study used a combination of pre-test and post-test and a closed-ended questionnaire as the data collection instruments. The independent variable in the study was CTL, while the dependent variables were learning outcomes and math anxiety. The paired t-test showed a significant increase in the students' average learning outcomes and a decrease in the average math anxiety levels. The findings suggest that implementing CTL is a practical approach to reducing math anxiety and improving student learning outcomes.</code> | <code>This study aims to determine the problem-solving ability of field independent (FI) and field dependent (FD) students in solving HOTS story problems. This type of research is qualitative research. The research strategy used is a descriptive model. This research was carried out at a junior school in Malang, Indonesia. The respondent was tenth-grade students. Data collection methods in this study include tests and interviews. Data analysis techniques include data collection, reduction, presentation, and concluding. The results of this study show that FI and FD students understand the problem. There is no difference between the two; FI and FD students are good at understanding the problem. FI students plan solutions well and can correctly create mathematical models, while FD students have difficulty developing mathematical models. In getting answers, FI and FD students have something in common: they are not quite right in the final solution.</code> | <code>The recently proposed recursive least-squares (RLS) algorithm for trilinear forms, namely RLS-TF, was designed for the identification of third-order tensors of rank one. In this context, a high-dimension system identification problem can be efficiently addressed (gaining in terms of both performance and complexity) based on tensor decompositions and modelling. In this paper, following the framework of the RLS-TF, we propose a regularized version of this algorithm, where the regularization terms are incorporated within the cost functions. Furthermore, the optimal regularization parameters are derived, aiming at attenuating the effects of the system noise. Simulation results support the performance features of the proposed algorithm, especially in terms of its robustness in noisy environments.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.05
}
```
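For reference, the loss configuration above corresponds to the following construction in Sentence Transformers. This is a sketch of the loss setup only, not the full training script; loading the plain ModernBERT checkpoint makes Sentence Transformers wrap it with mean pooling, matching the architecture described earlier.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("answerdotai/ModernBERT-large")

# Cosine distance with a 0.05 margin, as listed in the parameters above.
loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.COSINE,
    triplet_margin=0.05,
)
```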
### Evaluation Dataset
#### Unnamed Dataset
* Size: 391 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 391 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 85 tokens</li><li>mean: 237.84 tokens</li><li>max: 629 tokens</li></ul> | <ul><li>min: 93 tokens</li><li>mean: 239.31 tokens</li><li>max: 610 tokens</li></ul> | <ul><li>min: 83 tokens</li><li>mean: 234.79 tokens</li><li>max: 499 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The aim of the study was to determine the relationship between emotional intelligence and cohesion in a sports team of girls engaged in synchronized figure skating. The following psychological tests were used in the study: the Emotional Intelligence test by D.V. Lyusin, a test to determine the index of group cohesion of the Sisor. The study was conducted on the basis of the sports school "Yunost" in Yekaterinburg. Two teams of different age groups took part in the experiment: athletes performing in the category of "novices" ( years old), girls performing in the team of "CMS" ( - years old). Testing was conducted twice: at the beginning of the season and after the competitive season. The study revealed positive dynamics of the development of cohesion in both teams. It also revealed reliable relationships between interpersonal emotional intelligence and the level of cohesion in the team. Further research may be aimed at developing a strategy to increase emotional intelligence as a factor...</code> | <code>Recreational swimming can be used as a reliable preventive measure for those diseases that are widespread among students. The purpose of the research is to study the effect of swimming on the functional state of students. The study involved male students who are selfemployed in swimming and male students who are professionally engaged in the swimming section. The research methods used samples of Martinet-Kushelevsky, Rufier, Stange and Genchi, as well as chest excursions. It was revealed that students, who practice swimming in the section, have more favorable conditions for a comprehensive effect on the body than students who swim independently, due to a greater load during training and their systematic nature.</code> | <code>Mg Ni + x% Ti . Mn . V . ( x = ,00,and ) composites were prepared by hydriding combustion synthesis( HCS) and the HCS products were mechanically milled( MM) to obtain Mg-based hydrogen-storage composites. The dehydriding properties,phase structure,surface morphology,and particle composition were studied by pressure-composition-temperature( pcT),X-ray diffraction( XRD) and scanning electron microscopy( SEM). Results showed that addition of %( mass fraction) Ti . Mn . V . exhibited the best desorption property for the HCS + MM product of Mg Ni , which could completely desorb . % H in s at K. The apparent dehydrogenation activation energy of the system was decreased to . kJ / mol from . kJ / mol of Mg Ni . The improvement of the desorption property could be attributed to the enhancement of diffusion and the hydrogen pumpingof Ti . Mn . V . .</code> |
| <code>This article has been retracted: please see Elsevier Policy on Article Withdrawal ( ). This article has been retracted at the request of authors due to scientific errors reported by authors. The author reported errors are: : In the " Case Description" section, Fig. A0 (wind and PV output power) is the input data for the simulation calculation. The authors report that, due to an oversight, they did not use real wind and PV output power data, which would lead to inaccurate results for the system simulation calculations. : For the " Model solving algorithm", the authors found that it is incorrect to use the properties of Gaussian functions to improve the CDE algorithm because Gaussian functions do not have the properties of concave functions. This is evidenced in the literature "DOI: : Fig. (Iterative Convergence Curve of Rastrigin Function) is tested using the benchmark test function (Rastrigin function) in order to demonstrate the feasibility of the GCDE algorithm. However, it is clear ...</code> | <code>Energy accessibility especially electrical energy is considered as one of the most appealing factors to achieve energy sustainability. The purpose of this study is to investigate energy sustainability using renewable energies for two high potential cities in the south-east of Iran until the year . In this regard, Homer software is used to evaluate economic and technical analyses of PV-wind-diesel hybrid system for the two cities by the data gathering which was collected from Iran's meteorological organization. Therefore, the average of solar radiation per month for Zabol and Zahak were about and (h/d). Also, mean wind speeds are calculated m/s and m/s for Zabol and Zahak respectively which proposed that these cities have high potential in order to electrical production by a hybrid system. Furthermore, the amount of electricity production by PV array for Zabol and Zahak were (kWh/yr) and (kWh/yr) respectively, and the amount of electricity production by wind turbine were (kWh/yr) and (k...</code> | <code>The philosophy that built by German Idealism is obtained and never neglected religion, this is not about the religious dogmas or the fantasy and legendary nature of religion, but it is about the spirit and the crux of religion. Nevertheless, there is always struggled to deprive it from fantasies and rebuilt by philosophical ideas. These ideal philosophers are asserted to reconstruct the stories and imaginary schemes of religion into philosophical and rational thinking. There is a change in the result of this process which is religion is retreated and the metaphysics is slightly appeared. In other word, this change is directed from revelation to metaphysical views. In the light of this, the German Idealism is taking two different ways toward religion: the negative direction; which is involved to the critical studies of the basis and construction of religion, and the positive direction; this direction is returned to religion, but this return is happened after reconstruct religion by the ...</code> |
| <code>In this paper measurements of momentum and current transport caused by current driven tearing instability are reported. The measurements are done in the Madison Symmetric Torus reversed-field pinch [R. N. Dexter, D. W. Kerst, T. W. Lovell, S. C. Prager, and J. C. Sprott, Fusion Technol. , ( )] in a regime with repetitive bursts of tearing instability causing magnetic field reconnection. It is established that the plasma parallel momentum profile flattens during these reconnection events: The flow decreases in the core and increases at the edge. The momentum relaxation phenomenon is similar in nature to the well established relaxation of the parallel electrical current and could be a general feature of self-organized systems. The measured fluctuation-induced Maxwell and Reynolds stresses, which govern the dynamics of plasma flow, are large and almost balance each other such that their difference is approximately equal to the rate of change of plasma momentum. The Hall dynamo, which is d...</code> | <code>We present measurements of magnetic fields generated in laser-driven coil targets irradiated by laser pulses of nanosecond duration, m wavelength, J energy, and W/cm0 intensity, at the LULI0000 facility. Using two perpendicular probing axes, proton deflectometry is used to characterize the coil current and static charge at different times. Results reveal various deflection features that can be unambiguously linked to a looping quasi-steady current of well-understood polarity or to a static charging of the coil surface. Measured currents are broadly consistent with predictions from a laser-driven diode-current source and lumped circuit model, supporting the quasi-steady assessment of the discharges. Peak magnetic fields of T at the center of -m-diameter coils, obtained at the moderate laser intensity, open up the use of such laser-driven coil targets at facilities worldwide to study numerous phenomena in magnetized high-energy-density plasmas, and its potential applications.</code> | <code>EU , , , , , . . . , . . , . , - -EU . .In August , the UK launched a new export strategy to increase UK total exports as a proportion of gross domestic product (GDP) to % and to build trading relationships around the world after Brexit. And the government aims to strengthen UK's position as one of the 00st century's great trading nations and to expand the export of traders by setting the five principle. These principles are a business-led approach, doing what only government can do, joining up across government with local partners and the private sector, digital by design and value for money. This paper examines the background, purpose and main contents of the UK new export strategy in UK and the countermeasures for the new UK export strategy. First of all, we should prepare a scenarios based on directions of Brexit. Second, it is necessary to discuss the redefinition of relationship with Korea-UK and Korea-EU. And finally, Korean companies should enter the UK by utilizing the e-comme...</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.COSINE",
"triplet_margin": 0.05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
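Translated into code, these non-default settings map roughly onto the following `SentenceTransformerTrainingArguments`; the `output_dir` is a placeholder, and every value not shown falls back to the defaults enumerated in the expandable list below.

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Mirrors the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs/discipline-bert-modern-large_v02",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=2,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```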
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | modernBERT_cosine_accuracy | modernBERT_disciplines_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------:|:--------------------------------------:|
| 0 | 0 | - | - | 0.8951 | - |
| 0.0511 | 100 | 0.0064 | 0.0049 | 0.9616 | - |
| 0.1022 | 200 | 0.002 | 0.0071 | 0.9565 | - |
| 0.1533 | 300 | 0.0076 | 0.0034 | 0.9795 | - |
| 0.2044 | 400 | 0.0074 | 0.0039 | 0.9668 | - |
| 0.2555 | 500 | 0.0036 | 0.0036 | 0.9693 | - |
| 0.3066 | 600 | 0.0035 | 0.0029 | 0.9770 | - |
| 0.3577 | 700 | 0.004 | 0.0035 | 0.9693 | - |
| 0.4088 | 800 | 0.0027 | 0.0034 | 0.9770 | - |
| 0.4599 | 900 | 0.0044 | 0.0032 | 0.9719 | - |
| 0.5110 | 1000 | 0.0037 | 0.0053 | 0.9565 | - |
| 0.5621 | 1100 | 0.0048 | 0.0029 | 0.9795 | - |
| 0.6132 | 1200 | 0.0032 | 0.0031 | 0.9744 | - |
| 0.6643 | 1300 | 0.0023 | 0.0036 | 0.9744 | - |
| 0.7154 | 1400 | 0.0044 | 0.0029 | 0.9821 | - |
| 0.7665 | 1500 | 0.0022 | 0.0032 | 0.9795 | - |
| 0.8176 | 1600 | 0.0036 | 0.0034 | 0.9770 | - |
| 0.8687 | 1700 | 0.0022 | 0.0031 | 0.9821 | - |
| 0.9198 | 1800 | 0.0028 | 0.0025 | 0.9821 | - |
| 0.9709 | 1900 | 0.0054 | 0.0025 | 0.9821 | - |
| 1.0220 | 2000 | 0.003 | 0.0029 | 0.9770 | - |
| 1.0731 | 2100 | 0.0018 | 0.0026 | 0.9795 | - |
| 1.1242 | 2200 | 0.0021 | 0.0024 | 0.9847 | - |
| 1.1753 | 2300 | 0.0015 | - | - | 0.9789 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"PCR"
] | Non_BioNLP |
seonghyeonye/direct_3B | seonghyeonye | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2210.02969",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,664 | 1,665 | 12 | 1 | ---
datasets:
- bigscience/P3
language: en
license: apache-2.0
widget:
- text: A is the son of B's uncle. What is the family relationship between A and
B?
- text: 'Reorder the words in this sentence: justin and name bieber years is my am
I 27 old.'
- text: "Task: copy but say the opposite.\n PSG won its match against Barca."
- text: 'Is this review positive or negative? Review: Best cast iron skillet you will
every buy.'
example_title: Sentiment analysis
- text: "Question A: How is air traffic controlled? \nQuestion B: How do you become\
\ an air traffic controller?\nPick one: these questions are duplicates or not\
\ duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday.\
\ He chose her because she had foreign affairs experience as a former First Lady.\
\ \nIn the previous sentence, decide who 'her' is referring to."
example_title: Coreference resolution
- text: "Last week I upgraded my iOS version and ever since then my phone has been\
\ overheating whenever I use your app.\n Select the category for the above sentence\
\ from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach\
\ was carrying 38 passengers.\n Sentence 2: The head of the local disaster unit,\
\ Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n Do sentences\
\ 1 and 2 have the same meaning?"
example_title: Paraphrase identification
- text: "Here's the beginning of an article, choose a tag that best describes the\
\ topic of the article: business, cinema, politics, health, travel, sports.\n\n\
\ The best and worst fo 007 as 'No time to die' marks Daniel Craig's exit.\n (CNN)\
\ Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds.\
\ For a Cold War creation, Ian Fleming's suave spy has certainly gotten around,\
\ but despite different guises in the tuxedo and occasional scuba gear, when it\
\ comes to Bond ratings, there really shouldn't be much argument about who wore\
\ it best."
- text: "Max: Know any good websites to buy clothes from?\n Payton: Sure :) LINK 1,\
\ LINK 2, LINK 3\n Max: That's a lot of them!\n Payton: Yeah, but they have different\
\ things so I usually buy things from 2 or 3 of them.\n Max: I'll check them out.\
\ Thanks.\n\n Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\
\n Sentence A: you can leave the books on the table over there.\n Sentence B:\
\ the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book,\
\ a blue book, and a black book.\n The red book is to the right of the gray book.\
\ The black book is to the left of the blue book. The blue book is to the left\
\ of the gray book. The purple book is the second from the right.\n\n Which book\
\ is the leftmost book?"
example_title: Logic puzzles
- text: "The two men running to become New York City's next mayor will face off in\
\ their first debate Wednesday night.\n\n Democrat Eric Adams, the Brooklyn Borough\
\ president and a former New York City police captain, is widely expected to win\
\ the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era\
\ Guardian Angels anti-crime patril.\n\n Who are the men running for mayor?"
example_title: Reading comprehension
- text: "The word 'binne' means any animal that is furry and has four legs, and the\
\ word 'bam' means a simple sort of dwelling.\n\n Which of the following best\
\ characterizes binne bams?\n - Sentence 1: Binne bams are for pets.\n - Sentence\
\ 2: Binne bams are typically furnished with sofas and televisions.\n - Sentence\
\ 3: Binne bams are luxurious apartments.\n - Sentence 4: Binne bams are places\
\ where people live."
---
**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning)
# Model Description
DIRECT is a strong baseline for FLIPPED, based on the training objective of [T0-3B](https://huggingface.co/bigscience/T0_3B).
With only 5% of the token updates and half of the training datasets compared to T0-3B, DIRECT outperforms T0-3B (+6.38% mean accuracy on 14 NLP tasks, +1.19% mean accuracy on 14 BIG-bench tasks).
# How to use
A full explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](seonghyeonye/flipped_11B) checkpoint, as it leads (on average) to the best performance on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion|
|[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion|
Here is how to download the model in PyTorch:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")
```
If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`.
We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can run inference with our method.
**Note: the model was trained with fp32 activations. As such, we highly discourage running inference with fp16.**
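As a quick sanity check, here is a minimal inference sketch; the prompt is illustrative and not necessarily the exact format used in the paper (see the notebook above for that):

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/direct_3B")
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/direct_3B")
model.eval()

# Illustrative zero-shot prompt in the instruction-then-input style used by T0-like models.
prompt = "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=10)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```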
# Training procedure
The DIRECT model is based on [T5+LM](https://huggingface.co/google/t5-xl-lm-adapt), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective and additionally pre-trained with a language modeling objective on [C4](https://huggingface.co/datasets/c4).
Training details:
- Fine-tuning steps: 5'000
- Input sequence length: 512
- Target sequence length: 128
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 1e-4
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (any dataset with more than 500'000 examples was randomly subsampled down to 500'000 examples. We also randomly choose which instruction to use at each training step, so ideally each instruction appears about *num_examples/num_templates* times during training.)
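A rough sketch of the described sampling scheme; the dataset names and sizes below are made up for illustration and this is not the training code from the repository:

```python
import random

CAP = 500_000

# Hypothetical dataset sizes; the real mixtures are the P3 subsets listed below.
dataset_sizes = {"imdb": 25_000, "qqp": 364_000, "dbpedia_14": 560_000}

# Cap every dataset at 500'000 examples, then sample proportionally to the capped sizes.
capped = {name: min(size, CAP) for name, size in dataset_sizes.items()}
total = sum(capped.values())
weights = {name: size / total for name, size in capped.items()}


def sample_dataset():
    """Pick the dataset for the next training example, proportional to capped size."""
    return random.choices(list(weights), weights=list(weights.values()), k=1)[0]


def sample_template(templates):
    """Pick one of the dataset's prompt templates uniformly at random each step."""
    return random.choice(templates)
```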
# Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|FLIPPED_11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED-11B|
|DIRECT_3B|Same as FLIPPED-11B|
We only choose prompt examples that have output labels, which can be found on the dataset page.
# Evaluation data
We evaluate our models on the following datasets:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI(R1, R2, R3), CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
|QA|PIQA, ARC-Challenge, OpenbookQA|
We also evaluate FLIPPED on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Label generalization
We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969).
|Task category|(Datasets, Template name)|
|-|-|
|Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)|
|Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) |
The template names we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates).
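For reference, templates with these names can be loaded and applied with promptsource. A small sketch, assuming promptsource is installed and the named template exists in the library:

```python
from promptsource.templates import DatasetTemplates

# Load the IMDB templates and pick the one named in the table above.
imdb_templates = DatasetTemplates("imdb")
template = imdb_templates["Reviewer Enjoyment Yes No"]

# IMDB examples have "text" and "label" fields; this example is made up.
example = {"text": "Best cast iron skillet you will ever buy.", "label": 1}

rendered = template.apply(example)  # [prompted_input, target]
print(rendered)
```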
# BibTeX entry and citation info
```bibtex
@article{ye2022guess,
title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners},
author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
journal={arXiv preprint arXiv:2210.02969},
year={2022}
}
``` | [
"COREFERENCE_RESOLUTION"
] | [
"SCIQ"
] | Non_BioNLP |
nomic-ai/nomic-embed-text-v1-ablated | nomic-ai | sentence-similarity | [
"sentence-transformers",
"pytorch",
"onnx",
"nomic_bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"arxiv:2402.01613",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,705 | 1,722 | 335 | 4 | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.67164179104476
- type: ap
value: 42.7379383648841
- type: f1
value: 72.79997373883408
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.413775
- type: ap
value: 87.08812293673202
- type: f1
value: 90.39246586225426
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.80799999999999
- type: f1
value: 47.25679462673503
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.37
- type: map_at_10
value: 45.748
- type: map_at_100
value: 46.617
- type: map_at_1000
value: 46.622
- type: map_at_3
value: 40.564
- type: map_at_5
value: 43.69
- type: mrr_at_1
value: 30.868000000000002
- type: mrr_at_10
value: 45.905
- type: mrr_at_100
value: 46.787
- type: mrr_at_1000
value: 46.792
- type: mrr_at_3
value: 40.717999999999996
- type: mrr_at_5
value: 43.851
- type: ndcg_at_1
value: 30.37
- type: ndcg_at_10
value: 54.662
- type: ndcg_at_100
value: 58.23700000000001
- type: ndcg_at_1000
value: 58.373
- type: ndcg_at_3
value: 44.069
- type: ndcg_at_5
value: 49.728
- type: precision_at_1
value: 30.37
- type: precision_at_10
value: 8.321000000000002
- type: precision_at_100
value: 0.985
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.089
- type: precision_at_5
value: 13.613
- type: recall_at_1
value: 30.37
- type: recall_at_10
value: 83.21499999999999
- type: recall_at_100
value: 98.506
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 54.266999999999996
- type: recall_at_5
value: 68.065
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.85329429748079
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.12666783330692
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.58783867794241
- type: mrr
value: 71.84078617596622
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.92453139507079
- type: cos_sim_spearman
value: 85.37122234964886
- type: euclidean_pearson
value: 86.19345621799168
- type: euclidean_spearman
value: 85.37122234964886
- type: manhattan_pearson
value: 86.4685290616604
- type: manhattan_spearman
value: 85.91400580167537
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.81818181818181
- type: f1
value: 83.76155217378863
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.46362764203256
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.13807021168658
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.725
- type: map_at_10
value: 39.654
- type: map_at_100
value: 41.022
- type: map_at_1000
value: 41.144999999999996
- type: map_at_3
value: 36.819
- type: map_at_5
value: 38.376
- type: mrr_at_1
value: 36.195
- type: mrr_at_10
value: 45.171
- type: mrr_at_100
value: 45.987
- type: mrr_at_1000
value: 46.033
- type: mrr_at_3
value: 43.038
- type: mrr_at_5
value: 44.196000000000005
- type: ndcg_at_1
value: 36.195
- type: ndcg_at_10
value: 45.194
- type: ndcg_at_100
value: 50.516000000000005
- type: ndcg_at_1000
value: 52.739000000000004
- type: ndcg_at_3
value: 41.142
- type: ndcg_at_5
value: 42.973
- type: precision_at_1
value: 36.195
- type: precision_at_10
value: 8.312
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 19.599
- type: precision_at_5
value: 13.847999999999999
- type: recall_at_1
value: 29.725
- type: recall_at_10
value: 55.51199999999999
- type: recall_at_100
value: 78.182
- type: recall_at_1000
value: 92.727
- type: recall_at_3
value: 43.287
- type: recall_at_5
value: 48.732
- type: map_at_1
value: 30.23
- type: map_at_10
value: 40.091
- type: map_at_100
value: 41.251
- type: map_at_1000
value: 41.384
- type: map_at_3
value: 37.247
- type: map_at_5
value: 38.865
- type: mrr_at_1
value: 38.279999999999994
- type: mrr_at_10
value: 46.288000000000004
- type: mrr_at_100
value: 47.022999999999996
- type: mrr_at_1000
value: 47.068
- type: mrr_at_3
value: 44.395
- type: mrr_at_5
value: 45.446
- type: ndcg_at_1
value: 38.279999999999994
- type: ndcg_at_10
value: 45.647
- type: ndcg_at_100
value: 49.851
- type: ndcg_at_1000
value: 51.991
- type: ndcg_at_3
value: 41.795
- type: ndcg_at_5
value: 43.578
- type: precision_at_1
value: 38.279999999999994
- type: precision_at_10
value: 8.522
- type: precision_at_100
value: 1.361
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 20.297
- type: precision_at_5
value: 14.255
- type: recall_at_1
value: 30.23
- type: recall_at_10
value: 55.094
- type: recall_at_100
value: 72.887
- type: recall_at_1000
value: 86.295
- type: recall_at_3
value: 43.244
- type: recall_at_5
value: 48.507
- type: map_at_1
value: 40.854
- type: map_at_10
value: 52.232
- type: map_at_100
value: 53.129000000000005
- type: map_at_1000
value: 53.185
- type: map_at_3
value: 49.094
- type: map_at_5
value: 50.834999999999994
- type: mrr_at_1
value: 46.708
- type: mrr_at_10
value: 56.021
- type: mrr_at_100
value: 56.584
- type: mrr_at_1000
value: 56.611999999999995
- type: mrr_at_3
value: 53.657
- type: mrr_at_5
value: 55.027
- type: ndcg_at_1
value: 46.708
- type: ndcg_at_10
value: 57.89
- type: ndcg_at_100
value: 61.541999999999994
- type: ndcg_at_1000
value: 62.754
- type: ndcg_at_3
value: 52.632
- type: ndcg_at_5
value: 55.104
- type: precision_at_1
value: 46.708
- type: precision_at_10
value: 9.122
- type: precision_at_100
value: 1.187
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.072
- type: precision_at_5
value: 15.661
- type: recall_at_1
value: 40.854
- type: recall_at_10
value: 70.98
- type: recall_at_100
value: 86.947
- type: recall_at_1000
value: 95.62
- type: recall_at_3
value: 56.782999999999994
- type: recall_at_5
value: 62.980000000000004
- type: map_at_1
value: 26.366
- type: map_at_10
value: 33.674
- type: map_at_100
value: 34.58
- type: map_at_1000
value: 34.662
- type: map_at_3
value: 31.596999999999998
- type: map_at_5
value: 32.596000000000004
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 35.912
- type: mrr_at_100
value: 36.696
- type: mrr_at_1000
value: 36.760999999999996
- type: mrr_at_3
value: 33.823
- type: mrr_at_5
value: 34.829
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 38.031
- type: ndcg_at_100
value: 42.678
- type: ndcg_at_1000
value: 44.871
- type: ndcg_at_3
value: 33.815
- type: ndcg_at_5
value: 35.531
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 5.638
- type: precision_at_100
value: 0.8380000000000001
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 13.974
- type: precision_at_5
value: 9.401
- type: recall_at_1
value: 26.366
- type: recall_at_10
value: 49.353
- type: recall_at_100
value: 71.194
- type: recall_at_1000
value: 87.842
- type: recall_at_3
value: 37.829
- type: recall_at_5
value: 41.976
- type: map_at_1
value: 16.634
- type: map_at_10
value: 23.271
- type: map_at_100
value: 24.366
- type: map_at_1000
value: 24.484
- type: map_at_3
value: 21.075
- type: map_at_5
value: 22.364
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 27.735
- type: mrr_at_100
value: 28.691
- type: mrr_at_1000
value: 28.762999999999998
- type: mrr_at_3
value: 25.518
- type: mrr_at_5
value: 26.762000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 27.791
- type: ndcg_at_100
value: 33.101
- type: ndcg_at_1000
value: 36.075
- type: ndcg_at_3
value: 23.74
- type: ndcg_at_5
value: 25.691000000000003
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 4.963
- type: precision_at_100
value: 0.873
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 11.111
- type: precision_at_5
value: 8.01
- type: recall_at_1
value: 16.634
- type: recall_at_10
value: 37.498
- type: recall_at_100
value: 60.598
- type: recall_at_1000
value: 81.828
- type: recall_at_3
value: 26.136
- type: recall_at_5
value: 31.211
- type: map_at_1
value: 28.200999999999997
- type: map_at_10
value: 37.619
- type: map_at_100
value: 38.834999999999994
- type: map_at_1000
value: 38.951
- type: map_at_3
value: 35.119
- type: map_at_5
value: 36.559999999999995
- type: mrr_at_1
value: 33.782000000000004
- type: mrr_at_10
value: 43.033
- type: mrr_at_100
value: 43.761
- type: mrr_at_1000
value: 43.818
- type: mrr_at_3
value: 40.727999999999994
- type: mrr_at_5
value: 42.129
- type: ndcg_at_1
value: 33.782000000000004
- type: ndcg_at_10
value: 43.178
- type: ndcg_at_100
value: 48.27
- type: ndcg_at_1000
value: 50.559
- type: ndcg_at_3
value: 38.974
- type: ndcg_at_5
value: 41.019
- type: precision_at_1
value: 33.782000000000004
- type: precision_at_10
value: 7.575
- type: precision_at_100
value: 1.1820000000000002
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 18.223
- type: precision_at_5
value: 12.742999999999999
- type: recall_at_1
value: 28.200999999999997
- type: recall_at_10
value: 54.089
- type: recall_at_100
value: 75.57000000000001
- type: recall_at_1000
value: 90.827
- type: recall_at_3
value: 42.435
- type: recall_at_5
value: 47.652
- type: map_at_1
value: 25.313000000000002
- type: map_at_10
value: 34.329
- type: map_at_100
value: 35.445
- type: map_at_1000
value: 35.556
- type: map_at_3
value: 31.659
- type: map_at_5
value: 32.981
- type: mrr_at_1
value: 30.822
- type: mrr_at_10
value: 39.084
- type: mrr_at_100
value: 39.97
- type: mrr_at_1000
value: 40.025
- type: mrr_at_3
value: 36.815
- type: mrr_at_5
value: 38.002
- type: ndcg_at_1
value: 30.822
- type: ndcg_at_10
value: 39.512
- type: ndcg_at_100
value: 44.925
- type: ndcg_at_1000
value: 47.274
- type: ndcg_at_3
value: 35.055
- type: ndcg_at_5
value: 36.788
- type: precision_at_1
value: 30.822
- type: precision_at_10
value: 7.1
- type: precision_at_100
value: 1.15
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 16.476
- type: precision_at_5
value: 11.461
- type: recall_at_1
value: 25.313000000000002
- type: recall_at_10
value: 50.178
- type: recall_at_100
value: 74.312
- type: recall_at_1000
value: 90.50200000000001
- type: recall_at_3
value: 37.626
- type: recall_at_5
value: 42.34
- type: map_at_1
value: 25.502250000000004
- type: map_at_10
value: 33.655166666666666
- type: map_at_100
value: 34.72833333333333
- type: map_at_1000
value: 34.84375
- type: map_at_3
value: 31.253999999999998
- type: map_at_5
value: 32.55075
- type: mrr_at_1
value: 29.91975
- type: mrr_at_10
value: 37.65441666666667
- type: mrr_at_100
value: 38.464416666666665
- type: mrr_at_1000
value: 38.52591666666667
- type: mrr_at_3
value: 35.57858333333333
- type: mrr_at_5
value: 36.71083333333333
- type: ndcg_at_1
value: 29.91975
- type: ndcg_at_10
value: 38.47316666666667
- type: ndcg_at_100
value: 43.256416666666674
- type: ndcg_at_1000
value: 45.70658333333333
- type: ndcg_at_3
value: 34.350833333333334
- type: ndcg_at_5
value: 36.184583333333336
- type: precision_at_1
value: 29.91975
- type: precision_at_10
value: 6.5489999999999995
- type: precision_at_100
value: 1.0553333333333332
- type: precision_at_1000
value: 0.14516666666666667
- type: precision_at_3
value: 15.579083333333333
- type: precision_at_5
value: 10.851083333333332
- type: recall_at_1
value: 25.502250000000004
- type: recall_at_10
value: 48.7965
- type: recall_at_100
value: 69.93500000000002
- type: recall_at_1000
value: 87.17049999999999
- type: recall_at_3
value: 37.20433333333333
- type: recall_at_5
value: 42.00783333333333
- type: map_at_1
value: 23.777
- type: map_at_10
value: 29.932
- type: map_at_100
value: 30.778
- type: map_at_1000
value: 30.879
- type: map_at_3
value: 27.898
- type: map_at_5
value: 29.086000000000002
- type: mrr_at_1
value: 26.227
- type: mrr_at_10
value: 32.443
- type: mrr_at_100
value: 33.212
- type: mrr_at_1000
value: 33.29
- type: mrr_at_3
value: 30.419
- type: mrr_at_5
value: 31.616
- type: ndcg_at_1
value: 26.227
- type: ndcg_at_10
value: 33.774
- type: ndcg_at_100
value: 37.917
- type: ndcg_at_1000
value: 40.557
- type: ndcg_at_3
value: 29.875
- type: ndcg_at_5
value: 31.845000000000002
- type: precision_at_1
value: 26.227
- type: precision_at_10
value: 5.153
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 8.773
- type: recall_at_1
value: 23.777
- type: recall_at_10
value: 43.142
- type: recall_at_100
value: 61.68900000000001
- type: recall_at_1000
value: 81.37100000000001
- type: recall_at_3
value: 32.582
- type: recall_at_5
value: 37.403
- type: map_at_1
value: 16.659
- type: map_at_10
value: 22.926
- type: map_at_100
value: 23.837
- type: map_at_1000
value: 23.953
- type: map_at_3
value: 21.029999999999998
- type: map_at_5
value: 22.019
- type: mrr_at_1
value: 19.649
- type: mrr_at_10
value: 26.32
- type: mrr_at_100
value: 27.143
- type: mrr_at_1000
value: 27.222
- type: mrr_at_3
value: 24.484
- type: mrr_at_5
value: 25.468000000000004
- type: ndcg_at_1
value: 19.649
- type: ndcg_at_10
value: 26.941
- type: ndcg_at_100
value: 31.522
- type: ndcg_at_1000
value: 34.538999999999994
- type: ndcg_at_3
value: 23.419999999999998
- type: ndcg_at_5
value: 24.927
- type: precision_at_1
value: 19.649
- type: precision_at_10
value: 4.7010000000000005
- type: precision_at_100
value: 0.8130000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 10.735999999999999
- type: precision_at_5
value: 7.591
- type: recall_at_1
value: 16.659
- type: recall_at_10
value: 35.721000000000004
- type: recall_at_100
value: 56.43
- type: recall_at_1000
value: 78.464
- type: recall_at_3
value: 25.878
- type: recall_at_5
value: 29.731999999999996
- type: map_at_1
value: 24.309
- type: map_at_10
value: 31.990000000000002
- type: map_at_100
value: 32.895
- type: map_at_1000
value: 33.0
- type: map_at_3
value: 29.848999999999997
- type: map_at_5
value: 30.942999999999998
- type: mrr_at_1
value: 28.638
- type: mrr_at_10
value: 36.036
- type: mrr_at_100
value: 36.787
- type: mrr_at_1000
value: 36.855
- type: mrr_at_3
value: 34.08
- type: mrr_at_5
value: 35.073
- type: ndcg_at_1
value: 28.638
- type: ndcg_at_10
value: 36.588
- type: ndcg_at_100
value: 41.152
- type: ndcg_at_1000
value: 43.769999999999996
- type: ndcg_at_3
value: 32.632
- type: ndcg_at_5
value: 34.249
- type: precision_at_1
value: 28.638
- type: precision_at_10
value: 5.942
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 14.582999999999998
- type: precision_at_5
value: 9.944
- type: recall_at_1
value: 24.309
- type: recall_at_10
value: 46.725
- type: recall_at_100
value: 67.11
- type: recall_at_1000
value: 85.91499999999999
- type: recall_at_3
value: 35.72
- type: recall_at_5
value: 39.854
- type: map_at_1
value: 22.997999999999998
- type: map_at_10
value: 30.564000000000004
- type: map_at_100
value: 32.06
- type: map_at_1000
value: 32.282
- type: map_at_3
value: 28.12
- type: map_at_5
value: 29.395
- type: mrr_at_1
value: 27.075
- type: mrr_at_10
value: 34.510999999999996
- type: mrr_at_100
value: 35.549
- type: mrr_at_1000
value: 35.616
- type: mrr_at_3
value: 32.444
- type: mrr_at_5
value: 33.589999999999996
- type: ndcg_at_1
value: 27.075
- type: ndcg_at_10
value: 35.582
- type: ndcg_at_100
value: 41.308
- type: ndcg_at_1000
value: 44.385999999999996
- type: ndcg_at_3
value: 31.467
- type: ndcg_at_5
value: 33.189
- type: precision_at_1
value: 27.075
- type: precision_at_10
value: 6.68
- type: precision_at_100
value: 1.427
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 14.625
- type: precision_at_5
value: 10.356
- type: recall_at_1
value: 22.997999999999998
- type: recall_at_10
value: 45.196
- type: recall_at_100
value: 70.319
- type: recall_at_1000
value: 90.766
- type: recall_at_3
value: 33.487
- type: recall_at_5
value: 38.297
- type: map_at_1
value: 20.961
- type: map_at_10
value: 27.58
- type: map_at_100
value: 28.542
- type: map_at_1000
value: 28.644
- type: map_at_3
value: 25.541000000000004
- type: map_at_5
value: 26.589000000000002
- type: mrr_at_1
value: 22.551
- type: mrr_at_10
value: 29.298999999999996
- type: mrr_at_100
value: 30.17
- type: mrr_at_1000
value: 30.248
- type: mrr_at_3
value: 27.542
- type: mrr_at_5
value: 28.392
- type: ndcg_at_1
value: 22.551
- type: ndcg_at_10
value: 31.55
- type: ndcg_at_100
value: 36.295
- type: ndcg_at_1000
value: 38.964
- type: ndcg_at_3
value: 27.663
- type: ndcg_at_5
value: 29.321
- type: precision_at_1
value: 22.551
- type: precision_at_10
value: 4.88
- type: precision_at_100
value: 0.7779999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 11.83
- type: precision_at_5
value: 8.17
- type: recall_at_1
value: 20.961
- type: recall_at_10
value: 42.07
- type: recall_at_100
value: 63.982000000000006
- type: recall_at_1000
value: 83.889
- type: recall_at_3
value: 31.445
- type: recall_at_5
value: 35.410000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.314
- type: map_at_10
value: 18.983
- type: map_at_100
value: 20.851
- type: map_at_1000
value: 21.066
- type: map_at_3
value: 16.014
- type: map_at_5
value: 17.569000000000003
- type: mrr_at_1
value: 25.277
- type: mrr_at_10
value: 36.657000000000004
- type: mrr_at_100
value: 37.646
- type: mrr_at_1000
value: 37.686
- type: mrr_at_3
value: 33.17
- type: mrr_at_5
value: 35.232
- type: ndcg_at_1
value: 25.277
- type: ndcg_at_10
value: 27.011000000000003
- type: ndcg_at_100
value: 34.418
- type: ndcg_at_1000
value: 38.089
- type: ndcg_at_3
value: 22.026
- type: ndcg_at_5
value: 23.866
- type: precision_at_1
value: 25.277
- type: precision_at_10
value: 8.397
- type: precision_at_100
value: 1.6320000000000001
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 16.156000000000002
- type: precision_at_5
value: 12.612000000000002
- type: recall_at_1
value: 11.314
- type: recall_at_10
value: 32.474
- type: recall_at_100
value: 57.926
- type: recall_at_1000
value: 78.387
- type: recall_at_3
value: 20.415
- type: recall_at_5
value: 25.407999999999998
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.835999999999999
- type: map_at_10
value: 19.73
- type: map_at_100
value: 28.011000000000003
- type: map_at_1000
value: 29.519000000000002
- type: map_at_3
value: 14.249
- type: map_at_5
value: 16.472
- type: mrr_at_1
value: 67.0
- type: mrr_at_10
value: 74.632
- type: mrr_at_100
value: 74.97200000000001
- type: mrr_at_1000
value: 74.97500000000001
- type: mrr_at_3
value: 72.958
- type: mrr_at_5
value: 73.908
- type: ndcg_at_1
value: 55.875
- type: ndcg_at_10
value: 42.071999999999996
- type: ndcg_at_100
value: 46.091
- type: ndcg_at_1000
value: 52.737
- type: ndcg_at_3
value: 47.079
- type: ndcg_at_5
value: 43.788
- type: precision_at_1
value: 67.0
- type: precision_at_10
value: 33.45
- type: precision_at_100
value: 10.633
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 49.583
- type: precision_at_5
value: 41.25
- type: recall_at_1
value: 8.835999999999999
- type: recall_at_10
value: 24.872
- type: recall_at_100
value: 51.427
- type: recall_at_1000
value: 72.17099999999999
- type: recall_at_3
value: 15.631999999999998
- type: recall_at_5
value: 18.956
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.80500000000001
- type: f1
value: 43.91955883597831
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 61.480999999999995
- type: map_at_10
value: 72.162
- type: map_at_100
value: 72.487
- type: map_at_1000
value: 72.504
- type: map_at_3
value: 70.354
- type: map_at_5
value: 71.509
- type: mrr_at_1
value: 66.262
- type: mrr_at_10
value: 76.605
- type: mrr_at_100
value: 76.833
- type: mrr_at_1000
value: 76.839
- type: mrr_at_3
value: 74.977
- type: mrr_at_5
value: 76.06
- type: ndcg_at_1
value: 66.262
- type: ndcg_at_10
value: 77.323
- type: ndcg_at_100
value: 78.685
- type: ndcg_at_1000
value: 79.032
- type: ndcg_at_3
value: 74.015
- type: ndcg_at_5
value: 75.916
- type: precision_at_1
value: 66.262
- type: precision_at_10
value: 9.757
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 29.032999999999998
- type: precision_at_5
value: 18.5
- type: recall_at_1
value: 61.480999999999995
- type: recall_at_10
value: 88.878
- type: recall_at_100
value: 94.719
- type: recall_at_1000
value: 97.066
- type: recall_at_3
value: 79.95100000000001
- type: recall_at_5
value: 84.691
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.925
- type: map_at_10
value: 31.621
- type: map_at_100
value: 33.282000000000004
- type: map_at_1000
value: 33.455
- type: map_at_3
value: 27.504
- type: map_at_5
value: 29.921999999999997
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 47.366
- type: mrr_at_100
value: 48.179
- type: mrr_at_1000
value: 48.219
- type: mrr_at_3
value: 45.062000000000005
- type: mrr_at_5
value: 46.404
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 39.019
- type: ndcg_at_100
value: 45.286
- type: ndcg_at_1000
value: 48.370000000000005
- type: ndcg_at_3
value: 35.421
- type: ndcg_at_5
value: 36.767
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 10.494
- type: precision_at_100
value: 1.7069999999999999
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_3
value: 23.200000000000003
- type: precision_at_5
value: 17.253
- type: recall_at_1
value: 19.925
- type: recall_at_10
value: 45.48
- type: recall_at_100
value: 68.585
- type: recall_at_1000
value: 87.128
- type: recall_at_3
value: 31.913000000000004
- type: recall_at_5
value: 38.107
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.961
- type: map_at_10
value: 55.010000000000005
- type: map_at_100
value: 55.896
- type: map_at_1000
value: 55.962
- type: map_at_3
value: 52.03
- type: map_at_5
value: 53.866
- type: mrr_at_1
value: 75.922
- type: mrr_at_10
value: 81.655
- type: mrr_at_100
value: 81.879
- type: mrr_at_1000
value: 81.889
- type: mrr_at_3
value: 80.657
- type: mrr_at_5
value: 81.291
- type: ndcg_at_1
value: 75.922
- type: ndcg_at_10
value: 64.119
- type: ndcg_at_100
value: 67.25
- type: ndcg_at_1000
value: 68.55499999999999
- type: ndcg_at_3
value: 59.792
- type: ndcg_at_5
value: 62.165000000000006
- type: precision_at_1
value: 75.922
- type: precision_at_10
value: 13.155
- type: precision_at_100
value: 1.5599999999999998
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 37.461
- type: precision_at_5
value: 24.351
- type: recall_at_1
value: 37.961
- type: recall_at_10
value: 65.77300000000001
- type: recall_at_100
value: 78.015
- type: recall_at_1000
value: 86.685
- type: recall_at_3
value: 56.192
- type: recall_at_5
value: 60.878
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 83.7804
- type: ap
value: 78.89508987851809
- type: f1
value: 83.72392373438922
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.807000000000002
- type: map_at_10
value: 36.411
- type: map_at_100
value: 37.574000000000005
- type: map_at_1000
value: 37.618
- type: map_at_3
value: 32.653
- type: map_at_5
value: 34.902
- type: mrr_at_1
value: 24.499000000000002
- type: mrr_at_10
value: 37.045
- type: mrr_at_100
value: 38.135999999999996
- type: mrr_at_1000
value: 38.175
- type: mrr_at_3
value: 33.326
- type: mrr_at_5
value: 35.561
- type: ndcg_at_1
value: 24.512999999999998
- type: ndcg_at_10
value: 43.328
- type: ndcg_at_100
value: 48.779
- type: ndcg_at_1000
value: 49.897999999999996
- type: ndcg_at_3
value: 35.713
- type: ndcg_at_5
value: 39.729
- type: precision_at_1
value: 24.512999999999998
- type: precision_at_10
value: 6.7379999999999995
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.196000000000002
- type: precision_at_5
value: 11.158
- type: recall_at_1
value: 23.807000000000002
- type: recall_at_10
value: 64.488
- type: recall_at_100
value: 89.386
- type: recall_at_1000
value: 97.968
- type: recall_at_3
value: 43.891000000000005
- type: recall_at_5
value: 53.535
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47013223894209
- type: f1
value: 93.15020887152107
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945737
- type: f1
value: 58.45703758149779
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.76395427034298
- type: f1
value: 70.6084399610629
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.69804976462676
- type: f1
value: 76.61599181962723
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.7253797676744
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.547731924629424
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.286918745183772
- type: mrr
value: 32.47449315230336
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.894
- type: map_at_10
value: 13.405000000000001
- type: map_at_100
value: 16.586000000000002
- type: map_at_1000
value: 17.919
- type: map_at_3
value: 10.066
- type: map_at_5
value: 11.679
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 54.018
- type: mrr_at_100
value: 54.581999999999994
- type: mrr_at_1000
value: 54.623
- type: mrr_at_3
value: 51.6
- type: mrr_at_5
value: 53.473000000000006
- type: ndcg_at_1
value: 43.189
- type: ndcg_at_10
value: 35.306
- type: ndcg_at_100
value: 31.505
- type: ndcg_at_1000
value: 39.991
- type: ndcg_at_3
value: 41.108
- type: ndcg_at_5
value: 39.039
- type: precision_at_1
value: 44.582
- type: precision_at_10
value: 26.161
- type: precision_at_100
value: 7.867
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 39.112
- type: precision_at_5
value: 34.18
- type: recall_at_1
value: 5.894
- type: recall_at_10
value: 16.88
- type: recall_at_100
value: 30.671
- type: recall_at_1000
value: 61.42999999999999
- type: recall_at_3
value: 11.022
- type: recall_at_5
value: 13.697999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 38.440999999999995
- type: map_at_10
value: 54.187
- type: map_at_100
value: 55.022000000000006
- type: map_at_1000
value: 55.044000000000004
- type: map_at_3
value: 50.174
- type: map_at_5
value: 52.61
- type: mrr_at_1
value: 42.903000000000006
- type: mrr_at_10
value: 56.699
- type: mrr_at_100
value: 57.31
- type: mrr_at_1000
value: 57.325
- type: mrr_at_3
value: 53.63099999999999
- type: mrr_at_5
value: 55.596000000000004
- type: ndcg_at_1
value: 42.903000000000006
- type: ndcg_at_10
value: 61.434
- type: ndcg_at_100
value: 64.852
- type: ndcg_at_1000
value: 65.36
- type: ndcg_at_3
value: 54.193000000000005
- type: ndcg_at_5
value: 58.15
- type: precision_at_1
value: 42.903000000000006
- type: precision_at_10
value: 9.623
- type: precision_at_100
value: 1.1560000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 24.034
- type: precision_at_5
value: 16.779
- type: recall_at_1
value: 38.440999999999995
- type: recall_at_10
value: 80.72399999999999
- type: recall_at_100
value: 95.329
- type: recall_at_1000
value: 99.059
- type: recall_at_3
value: 62.343
- type: recall_at_5
value: 71.304
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.85000000000001
- type: map_at_10
value: 84.54
- type: map_at_100
value: 85.148
- type: map_at_1000
value: 85.168
- type: map_at_3
value: 81.631
- type: map_at_5
value: 83.45700000000001
- type: mrr_at_1
value: 81.58
- type: mrr_at_10
value: 87.732
- type: mrr_at_100
value: 87.825
- type: mrr_at_1000
value: 87.82600000000001
- type: mrr_at_3
value: 86.783
- type: mrr_at_5
value: 87.437
- type: ndcg_at_1
value: 81.56
- type: ndcg_at_10
value: 88.32900000000001
- type: ndcg_at_100
value: 89.513
- type: ndcg_at_1000
value: 89.63799999999999
- type: ndcg_at_3
value: 85.51100000000001
- type: ndcg_at_5
value: 87.062
- type: precision_at_1
value: 81.56
- type: precision_at_10
value: 13.349
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.293
- type: precision_at_5
value: 24.502
- type: recall_at_1
value: 70.85000000000001
- type: recall_at_10
value: 95.351
- type: recall_at_100
value: 99.405
- type: recall_at_1000
value: 99.958
- type: recall_at_3
value: 87.184
- type: recall_at_5
value: 91.625
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.81818576893834
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.57033658868022
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.468
- type: map_at_10
value: 11.109
- type: map_at_100
value: 12.921
- type: map_at_1000
value: 13.187999999999999
- type: map_at_3
value: 8.094999999999999
- type: map_at_5
value: 9.664
- type: mrr_at_1
value: 22.1
- type: mrr_at_10
value: 32.482
- type: mrr_at_100
value: 33.558
- type: mrr_at_1000
value: 33.623999999999995
- type: mrr_at_3
value: 29.25
- type: mrr_at_5
value: 31.080000000000002
- type: ndcg_at_1
value: 22.1
- type: ndcg_at_10
value: 18.695999999999998
- type: ndcg_at_100
value: 25.749
- type: ndcg_at_1000
value: 30.711
- type: ndcg_at_3
value: 17.974
- type: ndcg_at_5
value: 15.684000000000001
- type: precision_at_1
value: 22.1
- type: precision_at_10
value: 9.56
- type: precision_at_100
value: 1.966
- type: precision_at_1000
value: 0.316
- type: precision_at_3
value: 16.667
- type: precision_at_5
value: 13.68
- type: recall_at_1
value: 4.468
- type: recall_at_10
value: 19.373
- type: recall_at_100
value: 39.853
- type: recall_at_1000
value: 64.118
- type: recall_at_3
value: 10.133000000000001
- type: recall_at_5
value: 13.877999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 80.11452150923512
- type: cos_sim_spearman
value: 77.3007421887329
- type: euclidean_pearson
value: 78.2493681078981
- type: euclidean_spearman
value: 77.3007432741821
- type: manhattan_pearson
value: 78.19716818242554
- type: manhattan_spearman
value: 77.26439033199102
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 82.70293570563516
- type: cos_sim_spearman
value: 77.97040896962338
- type: euclidean_pearson
value: 77.98827330337348
- type: euclidean_spearman
value: 77.9704358930525
- type: manhattan_pearson
value: 78.06991702207395
- type: manhattan_spearman
value: 78.03857843100195
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.81236960157503
- type: cos_sim_spearman
value: 79.38801416063187
- type: euclidean_pearson
value: 79.35003045476847
- type: euclidean_spearman
value: 79.38797289536578
- type: manhattan_pearson
value: 79.33155563344724
- type: manhattan_spearman
value: 79.3858955436803
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 77.35604880089507
- type: cos_sim_spearman
value: 78.17327332594571
- type: euclidean_pearson
value: 77.30302038209295
- type: euclidean_spearman
value: 78.17327332594571
- type: manhattan_pearson
value: 77.31323781935417
- type: manhattan_spearman
value: 78.20141256686921
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 84.29348597583
- type: cos_sim_spearman
value: 85.50877410088334
- type: euclidean_pearson
value: 85.22367284169081
- type: euclidean_spearman
value: 85.50877410088334
- type: manhattan_pearson
value: 85.17979979737612
- type: manhattan_spearman
value: 85.46459282596254
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.16190794761513
- type: cos_sim_spearman
value: 84.94610605287254
- type: euclidean_pearson
value: 83.95587174131369
- type: euclidean_spearman
value: 84.94610605287254
- type: manhattan_pearson
value: 83.99025745366798
- type: manhattan_spearman
value: 84.98123107148953
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.3047190687711
- type: cos_sim_spearman
value: 85.86642469958113
- type: euclidean_pearson
value: 86.74377658528041
- type: euclidean_spearman
value: 85.86642469958113
- type: manhattan_pearson
value: 86.56967885987439
- type: manhattan_spearman
value: 85.63613272583275
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 64.8298932792099
- type: cos_sim_spearman
value: 64.27626667878636
- type: euclidean_pearson
value: 66.01603861201576
- type: euclidean_spearman
value: 64.27626667878636
- type: manhattan_pearson
value: 66.31232809448106
- type: manhattan_spearman
value: 64.46190921631559
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.73696291316243
- type: cos_sim_spearman
value: 83.41508337893958
- type: euclidean_pearson
value: 82.8827053024064
- type: euclidean_spearman
value: 83.41508337893958
- type: manhattan_pearson
value: 82.85613329045803
- type: manhattan_spearman
value: 83.40522047443645
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 75.51490079179645
- type: mrr
value: 92.6809655486126
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 67.208
- type: map_at_100
value: 67.702
- type: map_at_1000
value: 67.73
- type: map_at_3
value: 64.815
- type: map_at_5
value: 65.946
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 68.52000000000001
- type: mrr_at_100
value: 68.888
- type: mrr_at_1000
value: 68.911
- type: mrr_at_3
value: 66.833
- type: mrr_at_5
value: 67.617
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 71.511
- type: ndcg_at_100
value: 73.765
- type: ndcg_at_1000
value: 74.40299999999999
- type: ndcg_at_3
value: 67.411
- type: ndcg_at_5
value: 68.88
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.433
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.222
- type: precision_at_5
value: 16.866999999999997
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 83.439
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 71.922
- type: recall_at_5
value: 75.678
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.7990099009901
- type: cos_sim_ap
value: 94.8316184070519
- type: cos_sim_f1
value: 89.75265017667844
- type: cos_sim_precision
value: 90.62181447502549
- type: cos_sim_recall
value: 88.9
- type: dot_accuracy
value: 99.7990099009901
- type: dot_ap
value: 94.831611518794
- type: dot_f1
value: 89.75265017667844
- type: dot_precision
value: 90.62181447502549
- type: dot_recall
value: 88.9
- type: euclidean_accuracy
value: 99.7990099009901
- type: euclidean_ap
value: 94.83161335144017
- type: euclidean_f1
value: 89.75265017667844
- type: euclidean_precision
value: 90.62181447502549
- type: euclidean_recall
value: 88.9
- type: manhattan_accuracy
value: 99.8
- type: manhattan_ap
value: 94.84210829841739
- type: manhattan_f1
value: 89.60905349794238
- type: manhattan_precision
value: 92.26694915254238
- type: manhattan_recall
value: 87.1
- type: max_accuracy
value: 99.8
- type: max_ap
value: 94.84210829841739
- type: max_f1
value: 89.75265017667844
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.18343792633894
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.50944549814364
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 48.89100016028111
- type: mrr
value: 49.607630931160344
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.628145384101522
- type: cos_sim_spearman
value: 31.275306930726675
- type: dot_pearson
value: 30.62814883550051
- type: dot_spearman
value: 31.275306930726675
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.26
- type: map_at_10
value: 2.163
- type: map_at_100
value: 12.29
- type: map_at_1000
value: 29.221999999999998
- type: map_at_3
value: 0.729
- type: map_at_5
value: 1.161
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 89.0
- type: ndcg_at_10
value: 82.312
- type: ndcg_at_100
value: 61.971
- type: ndcg_at_1000
value: 54.065
- type: ndcg_at_3
value: 87.87700000000001
- type: ndcg_at_5
value: 85.475
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 87.4
- type: precision_at_100
value: 64.02
- type: precision_at_1000
value: 24.093999999999998
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 90.8
- type: recall_at_1
value: 0.26
- type: recall_at_10
value: 2.302
- type: recall_at_100
value: 15.148
- type: recall_at_1000
value: 50.55
- type: recall_at_3
value: 0.744
- type: recall_at_5
value: 1.198
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.217
- type: map_at_10
value: 11.378
- type: map_at_100
value: 18.022
- type: map_at_1000
value: 19.544
- type: map_at_3
value: 6.079
- type: map_at_5
value: 8.559
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 48.423
- type: mrr_at_100
value: 49.028
- type: mrr_at_1000
value: 49.028
- type: mrr_at_3
value: 44.897999999999996
- type: mrr_at_5
value: 46.531
- type: ndcg_at_1
value: 25.509999999999998
- type: ndcg_at_10
value: 27.860000000000003
- type: ndcg_at_100
value: 39.34
- type: ndcg_at_1000
value: 50.21
- type: ndcg_at_3
value: 30.968
- type: ndcg_at_5
value: 29.541
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 25.918000000000003
- type: precision_at_100
value: 8.184
- type: precision_at_1000
value: 1.545
- type: precision_at_3
value: 35.374
- type: precision_at_5
value: 31.837
- type: recall_at_1
value: 2.217
- type: recall_at_10
value: 18.511
- type: recall_at_100
value: 50.178
- type: recall_at_1000
value: 83.07600000000001
- type: recall_at_3
value: 7.811999999999999
- type: recall_at_5
value: 11.684
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.386
- type: ap
value: 14.58573366644018
- type: f1
value: 55.0170316975105
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.868704018109796
- type: f1
value: 61.175908652496624
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 48.72082824812323
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.43839780652083
- type: cos_sim_ap
value: 72.55258980537292
- type: cos_sim_f1
value: 66.4145419055752
- type: cos_sim_precision
value: 61.765373269798054
- type: cos_sim_recall
value: 71.82058047493403
- type: dot_accuracy
value: 85.43839780652083
- type: dot_ap
value: 72.55256370197756
- type: dot_f1
value: 66.4145419055752
- type: dot_precision
value: 61.765373269798054
- type: dot_recall
value: 71.82058047493403
- type: euclidean_accuracy
value: 85.43839780652083
- type: euclidean_ap
value: 72.55259011957311
- type: euclidean_f1
value: 66.4145419055752
- type: euclidean_precision
value: 61.765373269798054
- type: euclidean_recall
value: 71.82058047493403
- type: manhattan_accuracy
value: 85.40263455921799
- type: manhattan_ap
value: 72.47856062032
- type: manhattan_f1
value: 66.39413249969942
- type: manhattan_precision
value: 60.989617848464775
- type: manhattan_recall
value: 72.84960422163589
- type: max_accuracy
value: 85.43839780652083
- type: max_ap
value: 72.55259011957311
- type: max_f1
value: 66.4145419055752
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.24981565568363
- type: cos_sim_ap
value: 86.38437585690401
- type: cos_sim_f1
value: 78.79039565086076
- type: cos_sim_precision
value: 77.29629629629629
- type: cos_sim_recall
value: 80.34339390206344
- type: dot_accuracy
value: 89.24981565568363
- type: dot_ap
value: 86.38437587564587
- type: dot_f1
value: 78.79039565086076
- type: dot_precision
value: 77.29629629629629
- type: dot_recall
value: 80.34339390206344
- type: euclidean_accuracy
value: 89.24981565568363
- type: euclidean_ap
value: 86.38437691024106
- type: euclidean_f1
value: 78.79039565086076
- type: euclidean_precision
value: 77.29629629629629
- type: euclidean_recall
value: 80.34339390206344
- type: manhattan_accuracy
value: 89.25563705514806
- type: manhattan_ap
value: 86.35729146774388
- type: manhattan_f1
value: 78.7238059278837
- type: manhattan_precision
value: 77.23938653034007
- type: manhattan_recall
value: 80.26639975361873
- type: max_accuracy
value: 89.25563705514806
- type: max_ap
value: 86.38437691024106
- type: max_f1
value: 78.79039565086076
---
# nomic-embed-text-v1-ablated: A Reproducible Long Context (8192) Text Embedder
`nomic-embed-text-v1-ablated` is an 8192 context length text encoder. This checkpoint was trained after modifying the training dataset so that it differs from the dataset used to train our [final model](https://huggingface.co/nomic-ai/nomic-embed-text-v1). We release it to help understand the impact that subsets of our training data had on model outcomes. This release is part of our commitment to open-sourcing the training artifacts from our Nomic Embed Text tech report, available [here](https://arxiv.org/pdf/2402.01613).
If you want to use a model to extract embeddings, we suggest using [nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1).
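If that is your path, a minimal usage sketch might look like the following; it assumes the standard `sentence-transformers` loading path with `trust_remote_code=True` and the task prefixes (e.g. `search_document:` / `search_query:`) documented on the `nomic-embed-text-v1` model card:
```python
from sentence_transformers import SentenceTransformer

# Minimal sketch: load the recommended embedding model (not this ablated checkpoint).
# trust_remote_code and the task prefixes follow the nomic-embed-text-v1 model card.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

embeddings = model.encode([
    "search_document: Nomic Embed is a long-context text embedding model.",
    "search_query: What is Nomic Embed?",
])
print(embeddings.shape)
```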
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716 | 1,716 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# Load the fine-tuned embedding model from the Hugging Face Hub
model = SentenceTransformer(
    'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241',
    trust_remote_code=True
)

# Encode two example texts and compare them with cosine similarity
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | [
"SCIFACT"
] | Non_BioNLP |
Teradata/bge-base-en-v1.5 | Teradata | feature-extraction | [
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"teradata",
"en",
"license:mit",
"model-index",
"region:us"
] | 1,739 | 1,741 | 32 | 0 | ---
language:
- en
license: mit
tags:
- feature-extraction
- sentence-similarity
- mteb
- onnx
- teradata
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86
- type: precision_at_10
value: 83
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
---
***See Disclaimer below***
----
# A Teradata Vantage compatible Embeddings Model
# BAAI/bge-base-en-v1.5
## Overview of this Model
An Embedding Model which maps text (sentences/paragraphs) into a vector. The [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) model is well known for its effectiveness in capturing semantic meaning in text data. It's a state-of-the-art model trained on a large corpus, capable of generating high-quality text embeddings.
- 109.48M params (Sizes in ONNX format - "fp32": 415.72MB, "int8": 104.75MB, "uint8": 104.75MB)
- 512 maximum input tokens
- 768 dimensions of output vector
- License: MIT. The released models can be used for commercial purposes free of charge.
- Reference to Original Model: https://huggingface.co/BAAI/bge-base-en-v1.5
## Quickstart: Deploying this Model in Teradata Vantage
We have pre-converted the model into the ONNX format compatible with BYOM 6.0, eliminating the need for manual conversion.
**Note:** Ensure you have access to a Teradata Database with BYOM 6.0 installed.
To get started, clone the pre-converted model directly from the Teradata HuggingFace repository.
```python
import teradataml as tdml
import getpass
from huggingface_hub import hf_hub_download
model_name = "bge-base-en-v1.5"
number_dimensions_output = 768
model_file_name = "model.onnx"
# Step 1: Download Model from Teradata HuggingFace Page
hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"onnx/{model_file_name}", local_dir="./")
hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"tokenizer.json", local_dir="./")
# Step 2: Create Connection to Vantage
tdml.create_context(host = input('enter your hostname'),
username=input('enter your username'),
password = getpass.getpass("enter your password"))
# Step 3: Load Models into Vantage
# a) Embedding model
tdml.save_byom(model_id = model_name, # must be unique in the models table
model_file = f"onnx/{model_file_name}",
table_name = 'embeddings_models' )
# b) Tokenizer
tdml.save_byom(model_id = model_name, # must be unique in the models table
model_file = 'tokenizer.json',
table_name = 'embeddings_tokenizers')
# Step 4: Test ONNXEmbeddings Function
# Note that ONNXEmbeddings expects the 'payload' column to be 'txt'.
# If it has got a different name, just rename it in a subquery/CTE.
input_table = "emails.emails"
embeddings_query = f"""
SELECT
*
from mldb.ONNXEmbeddings(
on {input_table} as InputTable
on (select * from embeddings_models where model_id = '{model_name}') as ModelTable DIMENSION
on (select model as tokenizer from embeddings_tokenizers where model_id = '{model_name}') as TokenizerTable DIMENSION
using
Accumulate('id', 'txt')
ModelOutputTensor('sentence_embedding')
EnableMemoryCheck('false')
OutputFormat('FLOAT32({number_dimensions_output})')
OverwriteCachedModel('true')
) a
"""
DF_embeddings = tdml.DataFrame.from_query(embeddings_query)
DF_embeddings
```
## What Can I Do with the Embeddings?
Teradata Vantage includes pre-built in-database functions to process embeddings further. Explore the following examples (a quick local sketch follows this list):
- **Semantic Clustering with TD_KMeans:** [Semantic Clustering Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Clustering_Python.ipynb)
- **Semantic Distance with TD_VectorDistance:** [Semantic Similarity Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Similarity_Python.ipynb)
- **RAG-Based Application with TD_VectorDistance:** [RAG and Bedrock Query PDF Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/RAG_and_Bedrock_QueryPDF.ipynb)
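If you prefer to prototype outside the database first, the embeddings returned by `ONNXEmbeddings` can also be pulled into pandas and compared directly. The sketch below is a local illustration only (it does not use the in-database `TD_VectorDistance` function); it assumes the `DF_embeddings` DataFrame from the Quickstart above, and the `emb` column prefix is a placeholder — inspect `DF_embeddings.columns` for the actual layout produced by your `OutputFormat` setting.
```python
# Local sketch: pull a few embedding rows into pandas and compute cosine similarity.
# The 'emb' column prefix is an assumption -- check DF_embeddings.columns for the real names.
import numpy as np

pdf = DF_embeddings.to_pandas().head(10)
emb_cols = [c for c in pdf.columns if c.lower().startswith("emb")]  # assumed prefix
vectors = pdf[emb_cols].to_numpy(dtype=np.float32)

# Cosine similarity of the first row against all pulled rows.
unit = vectors / np.clip(np.linalg.norm(vectors, axis=1, keepdims=True), 1e-12, None)
print(unit @ unit[0])
```
For production workloads, the in-database `TD_KMeans` and `TD_VectorDistance` notebooks linked above avoid moving the vectors out of Vantage.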
## Deep Dive into Model Conversion to ONNX
**The steps below outline how we converted the open-source Hugging Face model into an ONNX file compatible with the in-database ONNXEmbeddings function.**
You do not need to perform these steps—they are provided solely for documentation and transparency. However, they may be helpful if you wish to convert another model to the required format.
### Part 1. Importing and Converting Model using optimum
We start by importing the pre-trained [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) model from Hugging Face.
To enhance performance and ensure compatibility with various execution environments, we'll use the [Optimum](https://github.com/huggingface/optimum) utility to convert the model into the ONNX (Open Neural Network Exchange) format.
After conversion to ONNX, we fix the opset in the ONNX file for compatibility with the ONNX runtime used in Teradata Vantage.
We generate ONNX files for multiple precisions: fp32, int8, and uint8.
You can find the detailed conversion steps in the file [convert.py](./convert.py); a simplified sketch of the approach follows below.
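The sketch below illustrates the general approach, assuming Optimum's `ORTModelForFeatureExtraction` exporter and the `onnx` version converter are used; the target opset (16 here) is a placeholder, and [convert.py](./convert.py) remains the authoritative reference.
```python
# Illustrative sketch of the export + opset fix (see convert.py for the real steps).
# Assumes optimum[onnxruntime] and onnx are installed; opset 16 is a placeholder target.
import onnx
from onnx import version_converter
from optimum.onnxruntime import ORTModelForFeatureExtraction

# 1) Export the Hugging Face model to ONNX with Optimum.
ort_model = ORTModelForFeatureExtraction.from_pretrained("BAAI/bge-base-en-v1.5", export=True)
ort_model.save_pretrained("onnx/")

# 2) Fix the opset for compatibility with the ONNX runtime used in Teradata Vantage.
model_proto = onnx.load("onnx/model.onnx")
onnx.save(version_converter.convert_version(model_proto, 16), "onnx/model.onnx")
```
The int8 and uint8 variants are produced by quantizing this fp32 export; see [convert.py](./convert.py) for the exact quantization settings.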
### Part 2. Running the model in Python with onnxruntime & compare results
Once the fixes are applied, we test the correctness of the ONNX model by calculating the cosine similarity between two texts with both the native SentenceTransformers model and the ONNX runtime, and comparing the results.
If the results match, the ONNX model produces the same embeddings as the native model, validating its correctness and suitability for further use in the database.
```python
import onnxruntime as rt
from sentence_transformers.util import cos_sim
from sentence_transformers import SentenceTransformer
import transformers
sentences_1 = 'How is the weather today?'
sentences_2 = 'What is the current weather like today?'
# Calculate ONNX result
tokenizer = transformers.AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
predef_sess = rt.InferenceSession("onnx/model.onnx")
enc1 = tokenizer(sentences_1)
embeddings_1_onnx = predef_sess.run(None, {"input_ids": [enc1.input_ids],
"attention_mask": [enc1.attention_mask]})
enc2 = tokenizer(sentences_2)
embeddings_2_onnx = predef_sess.run(None, {"input_ids": [enc2.input_ids],
"attention_mask": [enc2.attention_mask]})
# Calculate embeddings with SentenceTransformer
model = SentenceTransformer("BAAI/bge-base-en-v1.5", trust_remote_code=True)
embeddings_1_sentence_transformer = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2_sentence_transformer = model.encode(sentences_2, normalize_embeddings=True)
# Compare results
print("Cosine similiarity for embeddings calculated with ONNX:" + str(cos_sim(embeddings_1_onnx[1][0], embeddings_2_onnx[1][0])))
print("Cosine similiarity for embeddings calculated with SentenceTransformer:" + str(cos_sim(embeddings_1_sentence_transformer, embeddings_2_sentence_transformer)))
```
You can find the detailed ONNX vs. SentenceTransformer result comparison steps in the file [test_local.py](./test_local.py).
----
DISCLAIMER: The content herein (“Content”) is provided “AS IS” and is not covered by any Teradata Operations, Inc. and its affiliates (“Teradata”) agreements. Its listing here does not constitute certification or endorsement by Teradata.
To the extent any of the Content contains or is related to any artificial intelligence (“AI”) or other language learning models (“Models”) that interoperate with the products and services of Teradata, by accessing, bringing, deploying or using such Models, you acknowledge and agree that you are solely responsible for ensuring compliance with all applicable laws, regulations, and restrictions governing the use, deployment, and distribution of AI technologies. This includes, but is not limited to, AI Diffusion Rules, European Union AI Act, AI-related laws and regulations, privacy laws, export controls, and financial or sector-specific regulations.
While Teradata may provide support, guidance, or assistance in the deployment or implementation of Models to interoperate with Teradata’s products and/or services, you remain fully responsible for ensuring that your Models, data, and applications comply with all relevant legal and regulatory obligations. Our assistance does not constitute legal or regulatory approval, and Teradata disclaims any liability arising from non-compliance with applicable laws.
You must determine the suitability of the Models for any purpose. Given the probabilistic nature of machine learning and modeling, the use of the Models may in some situations result in incorrect output that does not accurately reflect the action generated. You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output. | [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit | Muennighoff | sentence-similarity | [
"sentence-transformers",
"pytorch",
"gpt_neo",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646 | 1,679 | 76 | 2 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-125M-weightedmean-msmarco-specb-bitfit
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 61.23880597014926
- type: ap
value: 25.854431650388644
- type: f1
value: 55.751862762818604
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 56.88436830835117
- type: ap
value: 72.67279104379772
- type: f1
value: 54.449840243786404
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.27586206896551
- type: ap
value: 14.067357642500387
- type: f1
value: 48.172318518691334
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 54.64668094218415
- type: ap
value: 11.776694555054965
- type: f1
value: 44.526622834078765
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 65.401225
- type: ap
value: 60.22809958678552
- type: f1
value: 65.0251824898292
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 31.165999999999993
- type: f1
value: 30.908870050167437
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 24.79
- type: f1
value: 24.5833598854121
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 26.643999999999995
- type: f1
value: 26.39012792213563
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 26.386000000000003
- type: f1
value: 26.276867791454873
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 22.078000000000003
- type: f1
value: 21.797960290226843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 24.274
- type: f1
value: 23.887054434822627
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 22.404
- type: map_at_10
value: 36.845
- type: map_at_100
value: 37.945
- type: map_at_1000
value: 37.966
- type: map_at_3
value: 31.78
- type: map_at_5
value: 34.608
- type: mrr_at_1
value: 22.902
- type: mrr_at_10
value: 37.034
- type: mrr_at_100
value: 38.134
- type: mrr_at_1000
value: 38.155
- type: mrr_at_3
value: 31.935000000000002
- type: mrr_at_5
value: 34.812
- type: ndcg_at_1
value: 22.404
- type: ndcg_at_10
value: 45.425
- type: ndcg_at_100
value: 50.354
- type: ndcg_at_1000
value: 50.873999999999995
- type: ndcg_at_3
value: 34.97
- type: ndcg_at_5
value: 40.081
- type: precision_at_1
value: 22.404
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.746
- type: precision_at_5
value: 11.337
- type: recall_at_1
value: 22.404
- type: recall_at_10
value: 73.044
- type: recall_at_100
value: 95.092
- type: recall_at_1000
value: 99.075
- type: recall_at_3
value: 44.239
- type: recall_at_5
value: 56.686
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 39.70858340673288
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 28.242847713721048
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.83700395192393
- type: mrr
value: 70.3891307215407
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 79.25366801756223
- type: cos_sim_spearman
value: 75.20954502580506
- type: euclidean_pearson
value: 78.79900722991617
- type: euclidean_spearman
value: 77.79996549607588
- type: manhattan_pearson
value: 78.18408109480399
- type: manhattan_spearman
value: 76.85958262303106
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 77.70454545454545
- type: f1
value: 77.6929000113803
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.63260395543984
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 27.038042665369925
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 22.139
- type: map_at_10
value: 28.839
- type: map_at_100
value: 30.023
- type: map_at_1000
value: 30.153000000000002
- type: map_at_3
value: 26.521
- type: map_at_5
value: 27.775
- type: mrr_at_1
value: 26.466
- type: mrr_at_10
value: 33.495000000000005
- type: mrr_at_100
value: 34.416999999999994
- type: mrr_at_1000
value: 34.485
- type: mrr_at_3
value: 31.402
- type: mrr_at_5
value: 32.496
- type: ndcg_at_1
value: 26.466
- type: ndcg_at_10
value: 33.372
- type: ndcg_at_100
value: 38.7
- type: ndcg_at_1000
value: 41.696
- type: ndcg_at_3
value: 29.443
- type: ndcg_at_5
value: 31.121
- type: precision_at_1
value: 26.466
- type: precision_at_10
value: 6.037
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 13.782
- type: precision_at_5
value: 9.757
- type: recall_at_1
value: 22.139
- type: recall_at_10
value: 42.39
- type: recall_at_100
value: 65.427
- type: recall_at_1000
value: 86.04899999999999
- type: recall_at_3
value: 31.127
- type: recall_at_5
value: 35.717999999999996
- type: map_at_1
value: 20.652
- type: map_at_10
value: 27.558
- type: map_at_100
value: 28.473
- type: map_at_1000
value: 28.577
- type: map_at_3
value: 25.402
- type: map_at_5
value: 26.68
- type: mrr_at_1
value: 25.223000000000003
- type: mrr_at_10
value: 31.966
- type: mrr_at_100
value: 32.664
- type: mrr_at_1000
value: 32.724
- type: mrr_at_3
value: 30.074
- type: mrr_at_5
value: 31.249
- type: ndcg_at_1
value: 25.223000000000003
- type: ndcg_at_10
value: 31.694
- type: ndcg_at_100
value: 35.662
- type: ndcg_at_1000
value: 38.092
- type: ndcg_at_3
value: 28.294000000000004
- type: ndcg_at_5
value: 30.049
- type: precision_at_1
value: 25.223000000000003
- type: precision_at_10
value: 5.777
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 13.397
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 20.652
- type: recall_at_10
value: 39.367999999999995
- type: recall_at_100
value: 56.485
- type: recall_at_1000
value: 73.292
- type: recall_at_3
value: 29.830000000000002
- type: recall_at_5
value: 34.43
- type: map_at_1
value: 25.180000000000003
- type: map_at_10
value: 34.579
- type: map_at_100
value: 35.589999999999996
- type: map_at_1000
value: 35.68
- type: map_at_3
value: 31.735999999999997
- type: map_at_5
value: 33.479
- type: mrr_at_1
value: 29.467
- type: mrr_at_10
value: 37.967
- type: mrr_at_100
value: 38.800000000000004
- type: mrr_at_1000
value: 38.858
- type: mrr_at_3
value: 35.465
- type: mrr_at_5
value: 37.057
- type: ndcg_at_1
value: 29.467
- type: ndcg_at_10
value: 39.796
- type: ndcg_at_100
value: 44.531
- type: ndcg_at_1000
value: 46.666000000000004
- type: ndcg_at_3
value: 34.676
- type: ndcg_at_5
value: 37.468
- type: precision_at_1
value: 29.467
- type: precision_at_10
value: 6.601999999999999
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.568999999999999
- type: precision_at_5
value: 11.172
- type: recall_at_1
value: 25.180000000000003
- type: recall_at_10
value: 52.269
- type: recall_at_100
value: 73.574
- type: recall_at_1000
value: 89.141
- type: recall_at_3
value: 38.522
- type: recall_at_5
value: 45.323
- type: map_at_1
value: 16.303
- type: map_at_10
value: 21.629
- type: map_at_100
value: 22.387999999999998
- type: map_at_1000
value: 22.489
- type: map_at_3
value: 19.608
- type: map_at_5
value: 20.774
- type: mrr_at_1
value: 17.740000000000002
- type: mrr_at_10
value: 23.214000000000002
- type: mrr_at_100
value: 23.97
- type: mrr_at_1000
value: 24.054000000000002
- type: mrr_at_3
value: 21.243000000000002
- type: mrr_at_5
value: 22.322
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 25.113000000000003
- type: ndcg_at_100
value: 29.287999999999997
- type: ndcg_at_1000
value: 32.204
- type: ndcg_at_3
value: 21.111
- type: ndcg_at_5
value: 23.061999999999998
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.955
- type: precision_at_100
value: 0.644
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 8.851
- type: precision_at_5
value: 6.418
- type: recall_at_1
value: 16.303
- type: recall_at_10
value: 34.487
- type: recall_at_100
value: 54.413999999999994
- type: recall_at_1000
value: 77.158
- type: recall_at_3
value: 23.733
- type: recall_at_5
value: 28.381
- type: map_at_1
value: 10.133000000000001
- type: map_at_10
value: 15.665999999999999
- type: map_at_100
value: 16.592000000000002
- type: map_at_1000
value: 16.733999999999998
- type: map_at_3
value: 13.625000000000002
- type: map_at_5
value: 14.721
- type: mrr_at_1
value: 12.562000000000001
- type: mrr_at_10
value: 18.487000000000002
- type: mrr_at_100
value: 19.391
- type: mrr_at_1000
value: 19.487
- type: mrr_at_3
value: 16.418
- type: mrr_at_5
value: 17.599999999999998
- type: ndcg_at_1
value: 12.562000000000001
- type: ndcg_at_10
value: 19.43
- type: ndcg_at_100
value: 24.546
- type: ndcg_at_1000
value: 28.193
- type: ndcg_at_3
value: 15.509999999999998
- type: ndcg_at_5
value: 17.322000000000003
- type: precision_at_1
value: 12.562000000000001
- type: precision_at_10
value: 3.794
- type: precision_at_100
value: 0.74
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 7.546
- type: precision_at_5
value: 5.721
- type: recall_at_1
value: 10.133000000000001
- type: recall_at_10
value: 28.261999999999997
- type: recall_at_100
value: 51.742999999999995
- type: recall_at_1000
value: 78.075
- type: recall_at_3
value: 17.634
- type: recall_at_5
value: 22.128999999999998
- type: map_at_1
value: 19.991999999999997
- type: map_at_10
value: 27.346999999999998
- type: map_at_100
value: 28.582
- type: map_at_1000
value: 28.716
- type: map_at_3
value: 24.907
- type: map_at_5
value: 26.1
- type: mrr_at_1
value: 23.773
- type: mrr_at_10
value: 31.647
- type: mrr_at_100
value: 32.639
- type: mrr_at_1000
value: 32.706
- type: mrr_at_3
value: 29.195
- type: mrr_at_5
value: 30.484
- type: ndcg_at_1
value: 23.773
- type: ndcg_at_10
value: 32.322
- type: ndcg_at_100
value: 37.996
- type: ndcg_at_1000
value: 40.819
- type: ndcg_at_3
value: 27.876
- type: ndcg_at_5
value: 29.664
- type: precision_at_1
value: 23.773
- type: precision_at_10
value: 5.976999999999999
- type: precision_at_100
value: 1.055
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 13.122
- type: precision_at_5
value: 9.451
- type: recall_at_1
value: 19.991999999999997
- type: recall_at_10
value: 43.106
- type: recall_at_100
value: 67.264
- type: recall_at_1000
value: 86.386
- type: recall_at_3
value: 30.392000000000003
- type: recall_at_5
value: 34.910999999999994
- type: map_at_1
value: 17.896
- type: map_at_10
value: 24.644
- type: map_at_100
value: 25.790000000000003
- type: map_at_1000
value: 25.913999999999998
- type: map_at_3
value: 22.694
- type: map_at_5
value: 23.69
- type: mrr_at_1
value: 21.346999999999998
- type: mrr_at_10
value: 28.594
- type: mrr_at_100
value: 29.543999999999997
- type: mrr_at_1000
value: 29.621
- type: mrr_at_3
value: 26.807
- type: mrr_at_5
value: 27.669
- type: ndcg_at_1
value: 21.346999999999998
- type: ndcg_at_10
value: 28.833
- type: ndcg_at_100
value: 34.272000000000006
- type: ndcg_at_1000
value: 37.355
- type: ndcg_at_3
value: 25.373
- type: ndcg_at_5
value: 26.756
- type: precision_at_1
value: 21.346999999999998
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 11.948
- type: precision_at_5
value: 8.425
- type: recall_at_1
value: 17.896
- type: recall_at_10
value: 37.291000000000004
- type: recall_at_100
value: 61.138000000000005
- type: recall_at_1000
value: 83.212
- type: recall_at_3
value: 27.705999999999996
- type: recall_at_5
value: 31.234
- type: map_at_1
value: 17.195166666666665
- type: map_at_10
value: 23.329083333333333
- type: map_at_100
value: 24.30308333333333
- type: map_at_1000
value: 24.422416666666667
- type: map_at_3
value: 21.327416666666664
- type: map_at_5
value: 22.419999999999998
- type: mrr_at_1
value: 19.999916666666667
- type: mrr_at_10
value: 26.390166666666666
- type: mrr_at_100
value: 27.230999999999998
- type: mrr_at_1000
value: 27.308333333333334
- type: mrr_at_3
value: 24.4675
- type: mrr_at_5
value: 25.541083333333336
- type: ndcg_at_1
value: 19.999916666666667
- type: ndcg_at_10
value: 27.248666666666665
- type: ndcg_at_100
value: 32.00258333333334
- type: ndcg_at_1000
value: 34.9465
- type: ndcg_at_3
value: 23.58566666666667
- type: ndcg_at_5
value: 25.26341666666666
- type: precision_at_1
value: 19.999916666666667
- type: precision_at_10
value: 4.772166666666666
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.12741666666666668
- type: precision_at_3
value: 10.756166666666669
- type: precision_at_5
value: 7.725416666666667
- type: recall_at_1
value: 17.195166666666665
- type: recall_at_10
value: 35.99083333333334
- type: recall_at_100
value: 57.467999999999996
- type: recall_at_1000
value: 78.82366666666667
- type: recall_at_3
value: 25.898499999999995
- type: recall_at_5
value: 30.084333333333333
- type: map_at_1
value: 16.779
- type: map_at_10
value: 21.557000000000002
- type: map_at_100
value: 22.338
- type: map_at_1000
value: 22.421
- type: map_at_3
value: 19.939
- type: map_at_5
value: 20.903
- type: mrr_at_1
value: 18.404999999999998
- type: mrr_at_10
value: 23.435
- type: mrr_at_100
value: 24.179000000000002
- type: mrr_at_1000
value: 24.25
- type: mrr_at_3
value: 21.907
- type: mrr_at_5
value: 22.781000000000002
- type: ndcg_at_1
value: 18.404999999999998
- type: ndcg_at_10
value: 24.515
- type: ndcg_at_100
value: 28.721000000000004
- type: ndcg_at_1000
value: 31.259999999999998
- type: ndcg_at_3
value: 21.508
- type: ndcg_at_5
value: 23.01
- type: precision_at_1
value: 18.404999999999998
- type: precision_at_10
value: 3.834
- type: precision_at_100
value: 0.641
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 9.151
- type: precision_at_5
value: 6.503
- type: recall_at_1
value: 16.779
- type: recall_at_10
value: 31.730000000000004
- type: recall_at_100
value: 51.673
- type: recall_at_1000
value: 71.17599999999999
- type: recall_at_3
value: 23.518
- type: recall_at_5
value: 27.230999999999998
- type: map_at_1
value: 9.279
- type: map_at_10
value: 13.822000000000001
- type: map_at_100
value: 14.533
- type: map_at_1000
value: 14.649999999999999
- type: map_at_3
value: 12.396
- type: map_at_5
value: 13.214
- type: mrr_at_1
value: 11.149000000000001
- type: mrr_at_10
value: 16.139
- type: mrr_at_100
value: 16.872
- type: mrr_at_1000
value: 16.964000000000002
- type: mrr_at_3
value: 14.613000000000001
- type: mrr_at_5
value: 15.486
- type: ndcg_at_1
value: 11.149000000000001
- type: ndcg_at_10
value: 16.82
- type: ndcg_at_100
value: 20.73
- type: ndcg_at_1000
value: 23.894000000000002
- type: ndcg_at_3
value: 14.11
- type: ndcg_at_5
value: 15.404000000000002
- type: precision_at_1
value: 11.149000000000001
- type: precision_at_10
value: 3.063
- type: precision_at_100
value: 0.587
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 6.699
- type: precision_at_5
value: 4.928
- type: recall_at_1
value: 9.279
- type: recall_at_10
value: 23.745
- type: recall_at_100
value: 41.873
- type: recall_at_1000
value: 64.982
- type: recall_at_3
value: 16.152
- type: recall_at_5
value: 19.409000000000002
- type: map_at_1
value: 16.36
- type: map_at_10
value: 21.927
- type: map_at_100
value: 22.889
- type: map_at_1000
value: 22.994
- type: map_at_3
value: 20.433
- type: map_at_5
value: 21.337
- type: mrr_at_1
value: 18.75
- type: mrr_at_10
value: 24.859
- type: mrr_at_100
value: 25.746999999999996
- type: mrr_at_1000
value: 25.829
- type: mrr_at_3
value: 23.383000000000003
- type: mrr_at_5
value: 24.297
- type: ndcg_at_1
value: 18.75
- type: ndcg_at_10
value: 25.372
- type: ndcg_at_100
value: 30.342999999999996
- type: ndcg_at_1000
value: 33.286
- type: ndcg_at_3
value: 22.627
- type: ndcg_at_5
value: 24.04
- type: precision_at_1
value: 18.75
- type: precision_at_10
value: 4.1419999999999995
- type: precision_at_100
value: 0.738
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 10.261000000000001
- type: precision_at_5
value: 7.164
- type: recall_at_1
value: 16.36
- type: recall_at_10
value: 32.949
- type: recall_at_100
value: 55.552
- type: recall_at_1000
value: 77.09899999999999
- type: recall_at_3
value: 25.538
- type: recall_at_5
value: 29.008
- type: map_at_1
value: 17.39
- type: map_at_10
value: 23.058
- type: map_at_100
value: 24.445
- type: map_at_1000
value: 24.637999999999998
- type: map_at_3
value: 21.037
- type: map_at_5
value: 21.966
- type: mrr_at_1
value: 19.96
- type: mrr_at_10
value: 26.301000000000002
- type: mrr_at_100
value: 27.297
- type: mrr_at_1000
value: 27.375
- type: mrr_at_3
value: 24.340999999999998
- type: mrr_at_5
value: 25.339
- type: ndcg_at_1
value: 19.96
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 32.997
- type: ndcg_at_1000
value: 36.359
- type: ndcg_at_3
value: 23.519000000000002
- type: ndcg_at_5
value: 24.915000000000003
- type: precision_at_1
value: 19.96
- type: precision_at_10
value: 5.356000000000001
- type: precision_at_100
value: 1.198
- type: precision_at_1000
value: 0.20400000000000001
- type: precision_at_3
value: 10.738
- type: precision_at_5
value: 7.904999999999999
- type: recall_at_1
value: 17.39
- type: recall_at_10
value: 35.254999999999995
- type: recall_at_100
value: 61.351
- type: recall_at_1000
value: 84.395
- type: recall_at_3
value: 25.194
- type: recall_at_5
value: 28.546
- type: map_at_1
value: 14.238999999999999
- type: map_at_10
value: 19.323
- type: map_at_100
value: 19.994
- type: map_at_1000
value: 20.102999999999998
- type: map_at_3
value: 17.631
- type: map_at_5
value: 18.401
- type: mrr_at_1
value: 15.157000000000002
- type: mrr_at_10
value: 20.578
- type: mrr_at_100
value: 21.252
- type: mrr_at_1000
value: 21.346999999999998
- type: mrr_at_3
value: 18.762
- type: mrr_at_5
value: 19.713
- type: ndcg_at_1
value: 15.157000000000002
- type: ndcg_at_10
value: 22.468
- type: ndcg_at_100
value: 26.245
- type: ndcg_at_1000
value: 29.534
- type: ndcg_at_3
value: 18.981
- type: ndcg_at_5
value: 20.349999999999998
- type: precision_at_1
value: 15.157000000000002
- type: precision_at_10
value: 3.512
- type: precision_at_100
value: 0.577
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 8.01
- type: precision_at_5
value: 5.656
- type: recall_at_1
value: 14.238999999999999
- type: recall_at_10
value: 31.038
- type: recall_at_100
value: 49.122
- type: recall_at_1000
value: 74.919
- type: recall_at_3
value: 21.436
- type: recall_at_5
value: 24.692
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 8.828
- type: map_at_10
value: 14.982000000000001
- type: map_at_100
value: 16.495
- type: map_at_1000
value: 16.658
- type: map_at_3
value: 12.366000000000001
- type: map_at_5
value: 13.655000000000001
- type: mrr_at_1
value: 19.088
- type: mrr_at_10
value: 29.29
- type: mrr_at_100
value: 30.291
- type: mrr_at_1000
value: 30.342000000000002
- type: mrr_at_3
value: 25.907000000000004
- type: mrr_at_5
value: 27.840999999999998
- type: ndcg_at_1
value: 19.088
- type: ndcg_at_10
value: 21.858
- type: ndcg_at_100
value: 28.323999999999998
- type: ndcg_at_1000
value: 31.561
- type: ndcg_at_3
value: 17.175
- type: ndcg_at_5
value: 18.869
- type: precision_at_1
value: 19.088
- type: precision_at_10
value: 6.9190000000000005
- type: precision_at_100
value: 1.376
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 12.703999999999999
- type: precision_at_5
value: 9.993
- type: recall_at_1
value: 8.828
- type: recall_at_10
value: 27.381
- type: recall_at_100
value: 50.0
- type: recall_at_1000
value: 68.355
- type: recall_at_3
value: 16.118
- type: recall_at_5
value: 20.587
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 5.586
- type: map_at_10
value: 10.040000000000001
- type: map_at_100
value: 12.55
- type: map_at_1000
value: 13.123999999999999
- type: map_at_3
value: 7.75
- type: map_at_5
value: 8.835999999999999
- type: mrr_at_1
value: 42.25
- type: mrr_at_10
value: 51.205999999999996
- type: mrr_at_100
value: 51.818
- type: mrr_at_1000
value: 51.855
- type: mrr_at_3
value: 48.875
- type: mrr_at_5
value: 50.488
- type: ndcg_at_1
value: 32.25
- type: ndcg_at_10
value: 22.718
- type: ndcg_at_100
value: 24.359
- type: ndcg_at_1000
value: 29.232000000000003
- type: ndcg_at_3
value: 25.974000000000004
- type: ndcg_at_5
value: 24.291999999999998
- type: precision_at_1
value: 42.25
- type: precision_at_10
value: 17.75
- type: precision_at_100
value: 5.032
- type: precision_at_1000
value: 1.117
- type: precision_at_3
value: 28.833
- type: precision_at_5
value: 24.25
- type: recall_at_1
value: 5.586
- type: recall_at_10
value: 14.16
- type: recall_at_100
value: 28.051
- type: recall_at_1000
value: 45.157000000000004
- type: recall_at_3
value: 8.758000000000001
- type: recall_at_5
value: 10.975999999999999
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 39.075
- type: f1
value: 35.01420354708222
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 43.519999999999996
- type: map_at_10
value: 54.368
- type: map_at_100
value: 54.918
- type: map_at_1000
value: 54.942
- type: map_at_3
value: 51.712
- type: map_at_5
value: 53.33599999999999
- type: mrr_at_1
value: 46.955000000000005
- type: mrr_at_10
value: 58.219
- type: mrr_at_100
value: 58.73500000000001
- type: mrr_at_1000
value: 58.753
- type: mrr_at_3
value: 55.518
- type: mrr_at_5
value: 57.191
- type: ndcg_at_1
value: 46.955000000000005
- type: ndcg_at_10
value: 60.45
- type: ndcg_at_100
value: 63.047
- type: ndcg_at_1000
value: 63.712999999999994
- type: ndcg_at_3
value: 55.233
- type: ndcg_at_5
value: 58.072
- type: precision_at_1
value: 46.955000000000005
- type: precision_at_10
value: 8.267
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 22.326999999999998
- type: precision_at_5
value: 14.940999999999999
- type: recall_at_1
value: 43.519999999999996
- type: recall_at_10
value: 75.632
- type: recall_at_100
value: 87.41600000000001
- type: recall_at_1000
value: 92.557
- type: recall_at_3
value: 61.597
- type: recall_at_5
value: 68.518
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.549000000000001
- type: map_at_10
value: 15.762
- type: map_at_100
value: 17.142
- type: map_at_1000
value: 17.329
- type: map_at_3
value: 13.575000000000001
- type: map_at_5
value: 14.754000000000001
- type: mrr_at_1
value: 19.753
- type: mrr_at_10
value: 26.568
- type: mrr_at_100
value: 27.606
- type: mrr_at_1000
value: 27.68
- type: mrr_at_3
value: 24.203
- type: mrr_at_5
value: 25.668999999999997
- type: ndcg_at_1
value: 19.753
- type: ndcg_at_10
value: 21.118000000000002
- type: ndcg_at_100
value: 27.308
- type: ndcg_at_1000
value: 31.304
- type: ndcg_at_3
value: 18.319
- type: ndcg_at_5
value: 19.414
- type: precision_at_1
value: 19.753
- type: precision_at_10
value: 6.08
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 12.191
- type: precision_at_5
value: 9.383
- type: recall_at_1
value: 9.549000000000001
- type: recall_at_10
value: 26.131
- type: recall_at_100
value: 50.544999999999995
- type: recall_at_1000
value: 74.968
- type: recall_at_3
value: 16.951
- type: recall_at_5
value: 20.95
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 25.544
- type: map_at_10
value: 32.62
- type: map_at_100
value: 33.275
- type: map_at_1000
value: 33.344
- type: map_at_3
value: 30.851
- type: map_at_5
value: 31.868999999999996
- type: mrr_at_1
value: 51.087
- type: mrr_at_10
value: 57.704
- type: mrr_at_100
value: 58.175
- type: mrr_at_1000
value: 58.207
- type: mrr_at_3
value: 56.106
- type: mrr_at_5
value: 57.074000000000005
- type: ndcg_at_1
value: 51.087
- type: ndcg_at_10
value: 40.876000000000005
- type: ndcg_at_100
value: 43.762
- type: ndcg_at_1000
value: 45.423
- type: ndcg_at_3
value: 37.65
- type: ndcg_at_5
value: 39.305
- type: precision_at_1
value: 51.087
- type: precision_at_10
value: 8.304
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 22.875999999999998
- type: precision_at_5
value: 15.033
- type: recall_at_1
value: 25.544
- type: recall_at_10
value: 41.519
- type: recall_at_100
value: 52.957
- type: recall_at_1000
value: 64.132
- type: recall_at_3
value: 34.315
- type: recall_at_5
value: 37.583
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 58.6696
- type: ap
value: 55.3644880984279
- type: f1
value: 58.07942097405652
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 14.442
- type: map_at_10
value: 22.932
- type: map_at_100
value: 24.132
- type: map_at_1000
value: 24.213
- type: map_at_3
value: 20.002
- type: map_at_5
value: 21.636
- type: mrr_at_1
value: 14.841999999999999
- type: mrr_at_10
value: 23.416
- type: mrr_at_100
value: 24.593999999999998
- type: mrr_at_1000
value: 24.669
- type: mrr_at_3
value: 20.494
- type: mrr_at_5
value: 22.14
- type: ndcg_at_1
value: 14.841999999999999
- type: ndcg_at_10
value: 27.975
- type: ndcg_at_100
value: 34.143
- type: ndcg_at_1000
value: 36.370000000000005
- type: ndcg_at_3
value: 21.944
- type: ndcg_at_5
value: 24.881
- type: precision_at_1
value: 14.841999999999999
- type: precision_at_10
value: 4.537
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 9.322
- type: precision_at_5
value: 7.074
- type: recall_at_1
value: 14.442
- type: recall_at_10
value: 43.557
- type: recall_at_100
value: 72.904
- type: recall_at_1000
value: 90.40700000000001
- type: recall_at_3
value: 27.088
- type: recall_at_5
value: 34.144000000000005
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.95622435020519
- type: f1
value: 86.58363130708494
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 62.73034657650043
- type: f1
value: 60.78623915840713
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 67.54503002001334
- type: f1
value: 65.34879794116112
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 65.35233322893829
- type: f1
value: 62.994001882446646
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 45.37110075295806
- type: f1
value: 44.26285860740745
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 55.276672694394215
- type: f1
value: 53.28388179869587
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 62.25262197902417
- type: f1
value: 43.44084037148853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.56043956043956
- type: f1
value: 32.86333673498598
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 49.93995997331555
- type: f1
value: 34.726671876888126
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 46.32947071719386
- type: f1
value: 32.325273615982795
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 32.208676945141626
- type: f1
value: 21.32185122815139
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 43.627486437613015
- type: f1
value: 27.04872922347508
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.548083389374575
- type: f1
value: 39.490307545239716
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.18291862811029
- type: f1
value: 23.437620034727473
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 30.134498991257562
- type: f1
value: 28.787175191531283
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.88433086751849
- type: f1
value: 36.264500398782126
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 29.17283120376597
- type: f1
value: 27.8101616531901
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.788836583725626
- type: f1
value: 39.71413181054801
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 44.176193678547406
- type: f1
value: 42.192499826552286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.07464694014795
- type: f1
value: 39.44188259183162
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.254203093476804
- type: f1
value: 34.46592715936761
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 61.40887693342301
- type: f1
value: 59.79854802683996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.679892400807
- type: f1
value: 42.04801248338172
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.59179556153329
- type: f1
value: 34.045862930486166
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.036987222595826
- type: f1
value: 38.117703439362785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.43981170141224
- type: f1
value: 42.7084388987865
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 31.593813046402154
- type: f1
value: 29.98550522450782
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 27.044384667114997
- type: f1
value: 27.313059184832667
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.453261600538
- type: f1
value: 37.309189326110435
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 27.979152656355076
- type: f1
value: 27.430939684346445
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.97108271687963
- type: f1
value: 43.40585705688761
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.302622730329524
- type: f1
value: 39.108052180520744
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.474108944182916
- type: f1
value: 45.85950328241134
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.60860793544048
- type: f1
value: 43.94920708216737
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.668459986550104
- type: f1
value: 37.6990034018859
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.6523201075992
- type: f1
value: 25.279084273189582
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 28.295225285810353
- type: f1
value: 26.645825638771548
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 23.480161398789505
- type: f1
value: 22.275241866506732
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.55682582380632
- type: f1
value: 36.004753171063605
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.84936112979153
- type: f1
value: 41.38932672359119
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.90921318090114
- type: f1
value: 23.968687483768807
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 29.86213853396099
- type: f1
value: 29.977152075255407
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.42098184263618
- type: f1
value: 41.50877432664628
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.131136516476126
- type: f1
value: 23.938932214086776
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 39.81506388702084
- type: f1
value: 38.809586587791664
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 43.62138533960995
- type: f1
value: 42.01386842914633
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.19569603227976
- type: f1
value: 40.00556559825827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 45.20847343644923
- type: f1
value: 44.24115005029051
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.80901143241426
- type: f1
value: 40.474074848670085
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.96839273705447
- type: f1
value: 35.095456843621
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 40.60524546065905
- type: f1
value: 39.302383051500136
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.75722932078009
- type: f1
value: 41.53763931497389
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.347007397444514
- type: f1
value: 41.04366017948627
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.12306657700067
- type: f1
value: 39.712940473289024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 24.603227975790183
- type: f1
value: 23.969236788828606
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.03698722259583
- type: f1
value: 24.37196123281459
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 35.40013449899126
- type: f1
value: 35.063600413688036
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 41.19031607262945
- type: f1
value: 40.240432304273014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 36.405514458641555
- type: f1
value: 36.03844992856558
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 25.934767989240076
- type: f1
value: 25.2074457023531
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 38.79959650302622
- type: f1
value: 37.160233794673125
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 46.244115669132476
- type: f1
value: 44.367480561291906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 42.30665770006724
- type: f1
value: 41.9642223283514
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.2481506388702
- type: f1
value: 40.924230769590785
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.30262273032952
- type: f1
value: 24.937105830264066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.07128446536651
- type: f1
value: 31.80245816594883
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.681237390719566
- type: f1
value: 36.37219042508338
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.56624075319435
- type: f1
value: 28.386042056362758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.1049092131809
- type: f1
value: 38.926150886991294
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.44384667114997
- type: f1
value: 42.578252395460005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.211163416274374
- type: f1
value: 41.04465858304789
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.503026227303295
- type: f1
value: 34.49785095312759
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.73772696704773
- type: f1
value: 69.21759502909043
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.078681909885674
- type: f1
value: 43.05914426901129
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.61264290517821
- type: f1
value: 32.02463177462754
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.35642232683255
- type: f1
value: 38.13642481807678
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.06724949562878
- type: f1
value: 43.19827608343738
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.178883658372555
- type: f1
value: 29.979761884698775
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.903160726294555
- type: f1
value: 25.833010434083363
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.379959650302624
- type: f1
value: 37.93134355292882
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.375924680564896
- type: f1
value: 26.96255693013172
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.361129791526565
- type: f1
value: 43.54445012295126
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.290517821116346
- type: f1
value: 37.26982052174147
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.4694014794889
- type: f1
value: 44.060986162841566
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.25756556825824
- type: f1
value: 45.625139456758816
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.12642905178212
- type: f1
value: 39.54392378396527
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 24.72763954270343
- type: f1
value: 23.337743140804484
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.741089441829182
- type: f1
value: 27.570876190083748
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 23.850033624747816
- type: f1
value: 22.86733484540032
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.56691324815064
- type: f1
value: 35.504081677134565
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.928043039677206
- type: f1
value: 39.108589131211254
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 25.527908540685946
- type: f1
value: 25.333391622280477
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 29.105581708137183
- type: f1
value: 28.478235012692814
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 43.78614660390047
- type: f1
value: 41.9640143926267
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 27.269670477471415
- type: f1
value: 26.228386764141852
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 39.018157363819775
- type: f1
value: 37.641949339321854
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.35978480161399
- type: f1
value: 42.6851176096831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 41.89307330195023
- type: f1
value: 40.888710642615024
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 45.901143241425686
- type: f1
value: 44.496942353920545
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.11566913248151
- type: f1
value: 41.953945105870616
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 32.76395427034297
- type: f1
value: 31.436372571600934
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 40.504371217215876
- type: f1
value: 39.322752749628165
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.51849361129792
- type: f1
value: 41.4139297118463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.293207800941495
- type: f1
value: 40.50409536806683
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.9993275050437
- type: f1
value: 41.045416224973266
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.32548755884331
- type: f1
value: 27.276841995561867
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 26.593813046402154
- type: f1
value: 25.483878616197586
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 36.788836583725626
- type: f1
value: 34.603932909177686
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 42.5689307330195
- type: f1
value: 40.924469309079825
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.09482178883658
- type: f1
value: 37.949628822857164
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 28.836583725622063
- type: f1
value: 27.806558655512344
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 37.357094821788834
- type: f1
value: 37.507918961038165
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 49.37794216543375
- type: f1
value: 47.20421153697707
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 44.42165433759248
- type: f1
value: 44.34741861198931
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 31.374938993074252
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 26.871455379644093
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.402396942935333
- type: mrr
value: 31.42600938803256
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.7740000000000005
- type: map_at_10
value: 7.614999999999999
- type: map_at_100
value: 9.574
- type: map_at_1000
value: 10.711
- type: map_at_3
value: 5.7540000000000004
- type: map_at_5
value: 6.6659999999999995
- type: mrr_at_1
value: 33.127
- type: mrr_at_10
value: 40.351
- type: mrr_at_100
value: 41.144
- type: mrr_at_1000
value: 41.202
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.190000000000005
- type: ndcg_at_1
value: 31.579
- type: ndcg_at_10
value: 22.792
- type: ndcg_at_100
value: 21.698999999999998
- type: ndcg_at_1000
value: 30.892999999999997
- type: ndcg_at_3
value: 26.828999999999997
- type: ndcg_at_5
value: 25.119000000000003
- type: precision_at_1
value: 33.127
- type: precision_at_10
value: 16.718
- type: precision_at_100
value: 5.7090000000000005
- type: precision_at_1000
value: 1.836
- type: precision_at_3
value: 24.768
- type: precision_at_5
value: 21.3
- type: recall_at_1
value: 3.7740000000000005
- type: recall_at_10
value: 10.302999999999999
- type: recall_at_100
value: 23.013
- type: recall_at_1000
value: 54.864999999999995
- type: recall_at_3
value: 6.554
- type: recall_at_5
value: 8.087
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 15.620999999999999
- type: map_at_10
value: 24.519
- type: map_at_100
value: 25.586
- type: map_at_1000
value: 25.662000000000003
- type: map_at_3
value: 21.619
- type: map_at_5
value: 23.232
- type: mrr_at_1
value: 17.497
- type: mrr_at_10
value: 26.301000000000002
- type: mrr_at_100
value: 27.235
- type: mrr_at_1000
value: 27.297
- type: mrr_at_3
value: 23.561
- type: mrr_at_5
value: 25.111
- type: ndcg_at_1
value: 17.497
- type: ndcg_at_10
value: 29.725
- type: ndcg_at_100
value: 34.824
- type: ndcg_at_1000
value: 36.907000000000004
- type: ndcg_at_3
value: 23.946
- type: ndcg_at_5
value: 26.739
- type: precision_at_1
value: 17.497
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.8099999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 11.114
- type: precision_at_5
value: 8.285
- type: recall_at_1
value: 15.620999999999999
- type: recall_at_10
value: 43.999
- type: recall_at_100
value: 67.183
- type: recall_at_1000
value: 83.174
- type: recall_at_3
value: 28.720000000000002
- type: recall_at_5
value: 35.154
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 54.717000000000006
- type: map_at_10
value: 67.514
- type: map_at_100
value: 68.484
- type: map_at_1000
value: 68.523
- type: map_at_3
value: 64.169
- type: map_at_5
value: 66.054
- type: mrr_at_1
value: 62.46000000000001
- type: mrr_at_10
value: 71.503
- type: mrr_at_100
value: 71.91499999999999
- type: mrr_at_1000
value: 71.923
- type: mrr_at_3
value: 69.46799999999999
- type: mrr_at_5
value: 70.677
- type: ndcg_at_1
value: 62.480000000000004
- type: ndcg_at_10
value: 72.98
- type: ndcg_at_100
value: 76.023
- type: ndcg_at_1000
value: 76.512
- type: ndcg_at_3
value: 68.138
- type: ndcg_at_5
value: 70.458
- type: precision_at_1
value: 62.480000000000004
- type: precision_at_10
value: 11.373
- type: precision_at_100
value: 1.437
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 29.622999999999998
- type: precision_at_5
value: 19.918
- type: recall_at_1
value: 54.717000000000006
- type: recall_at_10
value: 84.745
- type: recall_at_100
value: 96.528
- type: recall_at_1000
value: 99.39
- type: recall_at_3
value: 71.60600000000001
- type: recall_at_5
value: 77.511
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 40.23390747226228
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 49.090518272935626
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.028
- type: map_at_10
value: 6.968000000000001
- type: map_at_100
value: 8.200000000000001
- type: map_at_1000
value: 8.432
- type: map_at_3
value: 5.3069999999999995
- type: map_at_5
value: 6.099
- type: mrr_at_1
value: 14.799999999999999
- type: mrr_at_10
value: 22.425
- type: mrr_at_100
value: 23.577
- type: mrr_at_1000
value: 23.669999999999998
- type: mrr_at_3
value: 20.233
- type: mrr_at_5
value: 21.318
- type: ndcg_at_1
value: 14.799999999999999
- type: ndcg_at_10
value: 12.206
- type: ndcg_at_100
value: 17.799
- type: ndcg_at_1000
value: 22.891000000000002
- type: ndcg_at_3
value: 12.128
- type: ndcg_at_5
value: 10.212
- type: precision_at_1
value: 14.799999999999999
- type: precision_at_10
value: 6.17
- type: precision_at_100
value: 1.428
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 8.74
- type: recall_at_1
value: 3.028
- type: recall_at_10
value: 12.522
- type: recall_at_100
value: 28.975
- type: recall_at_1000
value: 54.038
- type: recall_at_3
value: 6.912999999999999
- type: recall_at_5
value: 8.883000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 76.62983928119752
- type: cos_sim_spearman
value: 65.92910683118656
- type: euclidean_pearson
value: 71.10290039690963
- type: euclidean_spearman
value: 64.80076622426652
- type: manhattan_pearson
value: 70.8944726230188
- type: manhattan_spearman
value: 64.75082576033986
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 74.42679147085553
- type: cos_sim_spearman
value: 66.52980061546658
- type: euclidean_pearson
value: 74.87039477408763
- type: euclidean_spearman
value: 70.63397666902786
- type: manhattan_pearson
value: 74.97015137513088
- type: manhattan_spearman
value: 70.75951355434326
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 75.62472426599543
- type: cos_sim_spearman
value: 76.1662886374236
- type: euclidean_pearson
value: 76.3297128081315
- type: euclidean_spearman
value: 77.19385151966563
- type: manhattan_pearson
value: 76.50363291423257
- type: manhattan_spearman
value: 77.37081896355399
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 74.48227705407035
- type: cos_sim_spearman
value: 69.04572664009687
- type: euclidean_pearson
value: 71.76138185714849
- type: euclidean_spearman
value: 68.93415452043307
- type: manhattan_pearson
value: 71.68010915543306
- type: manhattan_spearman
value: 68.99176321262806
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 78.1566527175902
- type: cos_sim_spearman
value: 79.23677712825851
- type: euclidean_pearson
value: 76.29138438696417
- type: euclidean_spearman
value: 77.20108266215374
- type: manhattan_pearson
value: 76.27464935799118
- type: manhattan_spearman
value: 77.15286174478099
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 75.068454465977
- type: cos_sim_spearman
value: 76.06792422441929
- type: euclidean_pearson
value: 70.64605440627699
- type: euclidean_spearman
value: 70.21776051117844
- type: manhattan_pearson
value: 70.32479295054918
- type: manhattan_spearman
value: 69.89782458638528
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 39.43327289939437
- type: cos_sim_spearman
value: 52.386010275505654
- type: euclidean_pearson
value: 46.40999904885745
- type: euclidean_spearman
value: 51.00333465175934
- type: manhattan_pearson
value: 46.55753533133655
- type: manhattan_spearman
value: 51.07550440519388
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 55.54431928210687
- type: cos_sim_spearman
value: 55.61674586076298
- type: euclidean_pearson
value: 58.07442713714088
- type: euclidean_spearman
value: 55.74066216931719
- type: manhattan_pearson
value: 57.84021675638542
- type: manhattan_spearman
value: 55.20365812536853
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 11.378463868809098
- type: cos_sim_spearman
value: 8.209569244801065
- type: euclidean_pearson
value: 1.07041700730406
- type: euclidean_spearman
value: 2.2052197108931892
- type: manhattan_pearson
value: 0.7671300251104268
- type: manhattan_spearman
value: 3.430645020535567
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 32.71403560929013
- type: cos_sim_spearman
value: 30.18181775929109
- type: euclidean_pearson
value: 25.57368595910298
- type: euclidean_spearman
value: 23.316649115731376
- type: manhattan_pearson
value: 24.144200325329614
- type: manhattan_spearman
value: 21.64621546338457
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 83.36340470799158
- type: cos_sim_spearman
value: 84.95398260629699
- type: euclidean_pearson
value: 80.69876969911644
- type: euclidean_spearman
value: 80.97451731130427
- type: manhattan_pearson
value: 80.65869354146945
- type: manhattan_spearman
value: 80.8540858718528
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 1.9200044163754912
- type: cos_sim_spearman
value: 1.0393399782021342
- type: euclidean_pearson
value: 1.1376003191297994
- type: euclidean_spearman
value: 1.8947106671763914
- type: manhattan_pearson
value: 3.8362564474484335
- type: manhattan_spearman
value: 4.242750882792888
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 26.561262451099577
- type: cos_sim_spearman
value: 28.776666666659906
- type: euclidean_pearson
value: 14.640410196999088
- type: euclidean_spearman
value: 16.10557011701786
- type: manhattan_pearson
value: 15.019405495911272
- type: manhattan_spearman
value: 15.37192083104197
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 69.7544202001433
- type: cos_sim_spearman
value: 71.88444295144646
- type: euclidean_pearson
value: 73.84934185952773
- type: euclidean_spearman
value: 73.26911108021089
- type: manhattan_pearson
value: 74.04354196954574
- type: manhattan_spearman
value: 73.37650787943872
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 27.70511842301491
- type: cos_sim_spearman
value: 26.339466714066447
- type: euclidean_pearson
value: 9.323158236506385
- type: euclidean_spearman
value: 7.32083231520273
- type: manhattan_pearson
value: 7.807399527573071
- type: manhattan_spearman
value: 5.525546663067113
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 24.226521799447692
- type: cos_sim_spearman
value: 20.72992940458968
- type: euclidean_pearson
value: 6.753378617205011
- type: euclidean_spearman
value: 6.281654679029505
- type: manhattan_pearson
value: 7.087180250449323
- type: manhattan_spearman
value: 6.41611659259516
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 29.131412364061234
- type: cos_sim_spearman
value: 25.053429612793547
- type: euclidean_pearson
value: 10.657141303962
- type: euclidean_spearman
value: 9.712124819778452
- type: manhattan_pearson
value: 12.481782693315688
- type: manhattan_spearman
value: 11.287958480905973
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 64.04750650962879
- type: cos_sim_spearman
value: 65.66183708171826
- type: euclidean_pearson
value: 66.90887604405887
- type: euclidean_spearman
value: 66.89814072484552
- type: manhattan_pearson
value: 67.31627110509089
- type: manhattan_spearman
value: 67.01048176165322
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 19.26519187000913
- type: cos_sim_spearman
value: 21.987647321429005
- type: euclidean_pearson
value: 17.850618752342946
- type: euclidean_spearman
value: 22.86669392885474
- type: manhattan_pearson
value: 18.16183594260708
- type: manhattan_spearman
value: 23.637510352837907
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 34.221261828226936
- type: cos_sim_spearman
value: 49.811823238907664
- type: euclidean_pearson
value: 44.50394399762147
- type: euclidean_spearman
value: 50.959184495072876
- type: manhattan_pearson
value: 45.83191034038624
- type: manhattan_spearman
value: 50.190409866117946
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 3.620381732096531
- type: cos_sim_spearman
value: 23.30843951799194
- type: euclidean_pearson
value: 0.965453312113125
- type: euclidean_spearman
value: 24.235967620790316
- type: manhattan_pearson
value: 1.4408922275701606
- type: manhattan_spearman
value: 25.161920137046096
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 16.69489628726267
- type: cos_sim_spearman
value: 34.66348380997687
- type: euclidean_pearson
value: 29.415825529188606
- type: euclidean_spearman
value: 38.33011033170646
- type: manhattan_pearson
value: 31.23273195263394
- type: manhattan_spearman
value: 39.10055785755795
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 9.134927430889528
- type: cos_sim_spearman
value: 28.18922448944151
- type: euclidean_pearson
value: 19.86814169549051
- type: euclidean_spearman
value: 27.519588644948627
- type: manhattan_pearson
value: 21.80949221238945
- type: manhattan_spearman
value: 28.25217200494078
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 3.6386482942352085
- type: cos_sim_spearman
value: 9.068119621940966
- type: euclidean_pearson
value: 0.8123129118737714
- type: euclidean_spearman
value: 9.173672890166147
- type: manhattan_pearson
value: 0.754518899822658
- type: manhattan_spearman
value: 8.431719541986524
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 2.972091574908432
- type: cos_sim_spearman
value: 25.48511383289232
- type: euclidean_pearson
value: 12.751569670148918
- type: euclidean_spearman
value: 24.940721642439286
- type: manhattan_pearson
value: 14.310238482989826
- type: manhattan_spearman
value: 24.69821216148647
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 54.4745185734135
- type: cos_sim_spearman
value: 67.66493409568727
- type: euclidean_pearson
value: 60.13580336797049
- type: euclidean_spearman
value: 66.12319300814538
- type: manhattan_pearson
value: 60.816210368708155
- type: manhattan_spearman
value: 65.70010026716766
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 49.37865412588201
- type: cos_sim_spearman
value: 53.07135629778897
- type: euclidean_pearson
value: 49.29201416711091
- type: euclidean_spearman
value: 50.54523702399645
- type: manhattan_pearson
value: 51.265764141268534
- type: manhattan_spearman
value: 51.979086403193605
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 44.925652392562135
- type: cos_sim_spearman
value: 49.51253904767726
- type: euclidean_pearson
value: 48.79346518897415
- type: euclidean_spearman
value: 51.47957870101565
- type: manhattan_pearson
value: 49.51314553898044
- type: manhattan_spearman
value: 51.895207893189166
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 45.241690321111875
- type: cos_sim_spearman
value: 48.24795739512037
- type: euclidean_pearson
value: 49.22719494399897
- type: euclidean_spearman
value: 49.64102442042809
- type: manhattan_pearson
value: 49.497887732970256
- type: manhattan_spearman
value: 49.940515338096304
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 36.42138324083909
- type: cos_sim_spearman
value: 36.79867489417801
- type: euclidean_pearson
value: 27.760612942610084
- type: euclidean_spearman
value: 29.140966500287625
- type: manhattan_pearson
value: 28.456674031350115
- type: manhattan_spearman
value: 27.46356370924497
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 26.55350664089358
- type: cos_sim_spearman
value: 28.681707196975008
- type: euclidean_pearson
value: 12.613577889195138
- type: euclidean_spearman
value: 13.589493311702933
- type: manhattan_pearson
value: 11.640157427420958
- type: manhattan_spearman
value: 10.345223941212415
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 38.54682179114309
- type: cos_sim_spearman
value: 45.782560880405704
- type: euclidean_pearson
value: 46.496857002368486
- type: euclidean_spearman
value: 48.21270426410012
- type: manhattan_pearson
value: 46.871839119374044
- type: manhattan_spearman
value: 47.556987773851525
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 35.12956772546032
- type: cos_sim_spearman
value: 32.96920218281008
- type: euclidean_pearson
value: 34.23140384382136
- type: euclidean_spearman
value: 32.19303153191447
- type: manhattan_pearson
value: 34.189468276600635
- type: manhattan_spearman
value: 34.887065709732376
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 30.507667380509634
- type: cos_sim_spearman
value: 20.447284723752716
- type: euclidean_pearson
value: 29.662041381794474
- type: euclidean_spearman
value: 20.939990379746757
- type: manhattan_pearson
value: 32.5112080506328
- type: manhattan_spearman
value: 23.773047901712495
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 71.10820459712156
- type: cos_sim_spearman
value: 61.97797868009122
- type: euclidean_pearson
value: 60.30910689156633
- type: euclidean_spearman
value: 61.97797868009122
- type: manhattan_pearson
value: 66.3405176964038
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 76.53032504460737
- type: cos_sim_spearman
value: 75.33716094627373
- type: euclidean_pearson
value: 69.64662673290599
- type: euclidean_spearman
value: 67.30188896368857
- type: manhattan_pearson
value: 69.45096082050807
- type: manhattan_spearman
value: 67.0718727259371
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 71.33941904192648
- type: mrr
value: 89.73766429648782
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 43.333
- type: map_at_10
value: 52.364
- type: map_at_100
value: 53.184
- type: map_at_1000
value: 53.234
- type: map_at_3
value: 49.832
- type: map_at_5
value: 51.244
- type: mrr_at_1
value: 45.333
- type: mrr_at_10
value: 53.455
- type: mrr_at_100
value: 54.191
- type: mrr_at_1000
value: 54.235
- type: mrr_at_3
value: 51.556000000000004
- type: mrr_at_5
value: 52.622
- type: ndcg_at_1
value: 45.333
- type: ndcg_at_10
value: 56.899
- type: ndcg_at_100
value: 60.702
- type: ndcg_at_1000
value: 62.046
- type: ndcg_at_3
value: 52.451
- type: ndcg_at_5
value: 54.534000000000006
- type: precision_at_1
value: 45.333
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 20.778
- type: precision_at_5
value: 13.866999999999999
- type: recall_at_1
value: 43.333
- type: recall_at_10
value: 69.69999999999999
- type: recall_at_100
value: 86.9
- type: recall_at_1000
value: 97.6
- type: recall_at_3
value: 57.81699999999999
- type: recall_at_5
value: 62.827999999999996
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.7
- type: cos_sim_ap
value: 89.88577913120001
- type: cos_sim_f1
value: 84.62694041061593
- type: cos_sim_precision
value: 84.7542627883651
- type: cos_sim_recall
value: 84.5
- type: dot_accuracy
value: 99.24752475247524
- type: dot_ap
value: 56.81855467290009
- type: dot_f1
value: 56.084126189283936
- type: dot_precision
value: 56.16850551654965
- type: dot_recall
value: 56.00000000000001
- type: euclidean_accuracy
value: 99.7059405940594
- type: euclidean_ap
value: 90.12451226491524
- type: euclidean_f1
value: 84.44211629125196
- type: euclidean_precision
value: 88.66886688668868
- type: euclidean_recall
value: 80.60000000000001
- type: manhattan_accuracy
value: 99.7128712871287
- type: manhattan_ap
value: 90.67590584183216
- type: manhattan_f1
value: 84.85436893203884
- type: manhattan_precision
value: 82.45283018867924
- type: manhattan_recall
value: 87.4
- type: max_accuracy
value: 99.7128712871287
- type: max_ap
value: 90.67590584183216
- type: max_f1
value: 84.85436893203884
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 52.74481093815175
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 32.65999453562101
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.74498464555465
- type: mrr
value: 45.333879764026825
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
      value: 29.603788751645216
- type: cos_sim_spearman
value: 29.705103354786033
- type: dot_pearson
value: 28.07425338095399
- type: dot_spearman
value: 26.841406359135366
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.241
- type: map_at_10
value: 1.672
- type: map_at_100
value: 7.858999999999999
- type: map_at_1000
value: 17.616
- type: map_at_3
value: 0.631
- type: map_at_5
value: 0.968
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 92.952
- type: mrr_at_100
value: 93.036
- type: mrr_at_1000
value: 93.036
- type: mrr_at_3
value: 92.667
- type: mrr_at_5
value: 92.667
- type: ndcg_at_1
value: 83.0
- type: ndcg_at_10
value: 70.30199999999999
- type: ndcg_at_100
value: 48.149
- type: ndcg_at_1000
value: 40.709
- type: ndcg_at_3
value: 79.173
- type: ndcg_at_5
value: 75.347
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 72.6
- type: precision_at_100
value: 48.46
- type: precision_at_1000
value: 18.093999999999998
- type: precision_at_3
value: 84.0
- type: precision_at_5
value: 78.8
- type: recall_at_1
value: 0.241
- type: recall_at_10
value: 1.814
- type: recall_at_100
value: 11.141
- type: recall_at_1000
value: 37.708999999999996
- type: recall_at_3
value: 0.647
- type: recall_at_5
value: 1.015
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 2.782
- type: map_at_10
value: 9.06
- type: map_at_100
value: 14.571000000000002
- type: map_at_1000
value: 16.006999999999998
- type: map_at_3
value: 5.037
- type: map_at_5
value: 6.63
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 48.243
- type: mrr_at_100
value: 49.065
- type: mrr_at_1000
value: 49.065
- type: mrr_at_3
value: 44.897999999999996
- type: mrr_at_5
value: 46.428999999999995
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 22.972
- type: ndcg_at_100
value: 34.777
- type: ndcg_at_1000
value: 45.639
- type: ndcg_at_3
value: 26.398
- type: ndcg_at_5
value: 24.418
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.224
- type: precision_at_1000
value: 1.4449999999999998
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 2.782
- type: recall_at_10
value: 14.841
- type: recall_at_100
value: 44.86
- type: recall_at_1000
value: 78.227
- type: recall_at_3
value: 5.959
- type: recall_at_5
value: 8.969000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 62.657999999999994
- type: ap
value: 10.96353161716344
- type: f1
value: 48.294226423442645
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 52.40803621958121
- type: f1
value: 52.61009636022186
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 32.12697126747911
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 80.69976753889253
- type: cos_sim_ap
value: 54.74680676121268
- type: cos_sim_f1
value: 53.18923998590391
- type: cos_sim_precision
value: 47.93563413084904
- type: cos_sim_recall
value: 59.73614775725594
- type: dot_accuracy
value: 79.3348036001669
- type: dot_ap
value: 48.46902128933627
- type: dot_f1
value: 50.480109739369006
- type: dot_precision
value: 42.06084051345173
- type: dot_recall
value: 63.113456464379944
- type: euclidean_accuracy
value: 79.78780473266973
- type: euclidean_ap
value: 50.258327255164815
- type: euclidean_f1
value: 49.655838666827684
- type: euclidean_precision
value: 45.78044978846582
- type: euclidean_recall
value: 54.24802110817942
- type: manhattan_accuracy
value: 79.76992310901831
- type: manhattan_ap
value: 49.89892485714363
- type: manhattan_f1
value: 49.330433787341185
- type: manhattan_precision
value: 43.56175459874672
- type: manhattan_recall
value: 56.86015831134564
- type: max_accuracy
value: 80.69976753889253
- type: max_ap
value: 54.74680676121268
- type: max_f1
value: 53.18923998590391
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.90573213800597
- type: cos_sim_ap
value: 81.05760818661524
- type: cos_sim_f1
value: 73.64688856729379
- type: cos_sim_precision
value: 69.46491946491946
- type: cos_sim_recall
value: 78.3646442870342
- type: dot_accuracy
value: 83.80680715644041
- type: dot_ap
value: 72.49774005947461
- type: dot_f1
value: 68.68460650173216
- type: dot_precision
value: 62.954647507858105
- type: dot_recall
value: 75.56205728364644
- type: euclidean_accuracy
value: 85.97430822369697
- type: euclidean_ap
value: 78.86101740829326
- type: euclidean_f1
value: 71.07960824663695
- type: euclidean_precision
value: 70.36897306270279
- type: euclidean_recall
value: 71.8047428395442
- type: manhattan_accuracy
value: 85.94132029339853
- type: manhattan_ap
value: 78.77876711171923
- type: manhattan_f1
value: 71.07869075515912
- type: manhattan_precision
value: 69.80697847067557
- type: manhattan_recall
value: 72.39759778256852
- type: max_accuracy
value: 86.90573213800597
- type: max_ap
value: 81.05760818661524
- type: max_f1
value: 73.64688856729379
---
# SGPT-125M-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
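A minimal loading sketch with the `sentence-transformers` library is shown below. The repository id and example sentences are assumptions for illustration; the SPECB variants also use special bracket tokens around queries and documents, so treat the linked codebase as the authoritative reference.
```python
# Hedged sketch: assumes the sentence-transformers package is installed and that
# the weights are hosted under the repository id below.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit")

# encode() applies the weighted-mean pooling described in the architecture section below
embeddings = model.encode([
    "How do I bake sourdough bread?",
    "Sourdough starter feeding schedule",
])
print(embeddings.shape)  # expected (2, 768) given the 768-dim word embeddings
```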
## Evaluation Results
For eval results, refer to the eval folder or our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 15600 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 0.0002
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
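As a hedged sketch, the parameters above map onto the standard `sentence-transformers` training loop roughly as follows. The training pairs, repository id, and output path are placeholders, and the original run additionally used BitFit (bias-only fine-tuning) and SPECB bracket tokens, which this sketch omits.
```python
# Hedged sketch of how the DataLoader / loss / fit() parameters above fit together.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("Muennighoff/SGPT-125M-weightedmean-msmarco-specb-bitfit")

# Placeholder query-passage pairs; the original run used MS MARCO training data.
train_examples = [
    InputExample(texts=["what is a corgi", "The corgi is a small herding dog breed."]),
    InputExample(texts=["python read file", "Use open() with a context manager to read files."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    optimizer_params={"lr": 2e-4},
    weight_decay=0.01,
    max_grad_norm=1,
    output_path="output/sgpt-msmarco-sketch",  # placeholder path
)
```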
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTNeoModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| [ "SUMMARIZATION" ] | [ "BIOSSES", "SCIFACT" ] | Non_BioNLP |
ayjays132/QNetworkGPT2Large | ayjays132 | text-generation | [ "transformers", "pytorch", "gpt2", "text-generation", "en", "dataset:vicgalle/alpaca-gpt4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] | 1,704 | 1,711 | 113 | 1 |
---
datasets:
- vicgalle/alpaca-gpt4
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- bleu
- accuracy
pipeline_tag: text-generation
model_type: GPT2LMHeadModel
architectures:
- GPT2LMHeadModel
model_filename: pytorch_model.bin
config:
activation_function: gelu_new
attn_pdrop: 0.1
bos_token_id: 50256
embd_pdrop: 0.1
eos_token_id: 50256
initializer_range: 0.02
layer_norm_epsilon: 1e-05
n_ctx: 2048
n_embd: 2048
n_head: 16
n_layer: 24
n_positions: 2048
n_special: 0
predict_special_tokens: true
resid_pdrop: 0.1
summary_first_dropout: 0.1
summary_proj_to_labels: true
summary_type: cls_index
summary_use_proj: true
task_specific_params:
text-generation:
do_sample: true
max_length: 200
vocab_size: 32101
---
# QNetworkGPT2: Reinventing Text Generation with AI 📝🤖

---
## Hyperparameters Used
Here's a consolidated list of hyperparameters for your QNetworkGPT2 RL model:
- `input_dim`: Input dimension for the RL agent.
- `output_dim`: Output dimension for the RL agent.
- `hidden_dim`: Hidden dimension for the RL agent.
- `num_episodes`: Number of training episodes.
- `generate_interval`: Interval for text generation during training.
- `load_path`: Path to load a pre-trained model.
- `model_name`: GPT-2 model architecture name.
- `max_new_tokens`: Maximum new tokens allowed during text generation.
- `max_length`: Maximum sequence length for input data.
- `sequence_length`: Length of sequences in the dataset.
- `batch_size`: Batch size for training.
- `learning_rate`: Learning rate for optimization.
- `gamma`: Discount factor for rewards.
- `clip_epsilon`: Epsilon value for policy loss clipping.
- `entropy_beta`: Beta value for entropy regularization.
- `epsilon_start`: Initial epsilon for epsilon-greedy exploration.
- `epsilon_end`: Minimum epsilon value.
- `epsilon_decay`: Epsilon decay rate.
- `heuristic_fn`: Heuristic function for action selection.
- `save_path`: Path to save the trained model.
Researchers can use these hyperparameters to configure and train their QNetworkGPT2 RL models effectively for text generation tasks.
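For illustration, these hyperparameters could be collected into a single configuration dictionary before constructing the agent. The concrete values below are placeholders, not the settings used to train this checkpoint.
```python
# Hedged sketch: example values only, chosen to be consistent with the config above.
config = {
    "input_dim": 2048,        # matches n_embd in the model config
    "output_dim": 32101,      # matches vocab_size
    "hidden_dim": 1024,
    "num_episodes": 500,
    "generate_interval": 50,
    "batch_size": 8,
    "sequence_length": 128,
    "max_length": 2048,
    "max_new_tokens": 200,
    "learning_rate": 1e-5,
    "gamma": 0.99,
    "clip_epsilon": 0.2,
    "entropy_beta": 0.01,
    "epsilon_start": 1.0,
    "epsilon_end": 0.05,
    "epsilon_decay": 0.995,
    "model_name": "gpt2-large",
    "load_path": None,
    "save_path": "qnetwork_gpt2_checkpoint.pt",
}
```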
---
## Overview
QNetworkGPT2 is an extraordinary AI model that marries Reinforcement Learning (RL) with the power of the GPT-2 language model to create impressive text generation experiences. 🚀
## Capabilities
### 1. Ultimate Flexibility
- Craft RL agents for diverse text generation tasks.
- Customize hyperparameters effortlessly.
- Harness the brilliance of GPT-2 for text generation magic.
### 2. Q-Network for Mastery
- Unleash the QNetwork class for Q-learning in text generation.
- Revel in its multi-layer neural network architecture with residual connections and strategic dropout rates.
- Empower your model with heuristic functions for ingenious action selection.
### 3. PPO Algorithm
- Embrace the Proximal Policy Optimization (PPO) algorithm for supreme policy updates.
- Sculpt policies with the wisdom of experiences and rewards (a minimal sketch of the clipped objective follows below).
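A minimal sketch of a PPO-style clipped surrogate objective with an entropy bonus, using the `clip_epsilon` and `entropy_beta` hyperparameters listed earlier, is shown below; it is illustrative only and not the exact loss implemented in this repository.
```python
# Hedged sketch of a PPO clipped policy loss with entropy regularization.
# Tensor shapes and the surrounding training loop are assumptions.
import torch

def ppo_policy_loss(new_log_probs, old_log_probs, advantages, entropy,
                    clip_epsilon=0.2, entropy_beta=0.01):
    ratio = torch.exp(new_log_probs - old_log_probs)               # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_epsilon, 1 + clip_epsilon) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()            # clipped surrogate
    return policy_loss - entropy_beta * entropy.mean()             # entropy bonus
```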
### 4. Tailored RL Environment
- Tailor-make your own RL environment for text generation quests.
- Reward the AI with BLEU scores and semantic similarity.
- Dance through text generation steps with episode-ending conditions.
### 5. Replay Buffer and Memory
- Store and summon experiences with grace in a replay buffer.
- Command a replay memory class to oversee experiences like a pro.
### 6. Epsilon-Greedy Exploration
- The agent employs epsilon-greedy exploration for marvelous discoveries (see the decay-schedule sketch below).
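A short sketch of the epsilon-greedy schedule implied by `epsilon_start`, `epsilon_end`, and `epsilon_decay` is shown below; the Q-value source and episode loop are assumptions.
```python
# Hedged sketch: epsilon-greedy token selection with exponential epsilon decay.
import random
import torch

def select_action(q_values, epsilon):
    """Pick a random token id with probability epsilon, else the greedy argmax.
    Assumes q_values is a 1-D tensor of Q-values over the vocabulary."""
    if random.random() < epsilon:
        return random.randrange(q_values.shape[-1])
    return int(torch.argmax(q_values, dim=-1))

epsilon = 1.0                          # epsilon_start
for episode in range(500):             # num_episodes (placeholder)
    # ... run one episode, calling select_action(q_values, epsilon) per step ...
    epsilon = max(0.05, epsilon * 0.995)   # epsilon_end, epsilon_decay
```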
### 7. Target Network for Rock-Solid Stability
- Keep target networks in check for unwavering stability during Q-learning escapades.
---
## How It Operates
1. Birth an RL Agent, fine-tuned to your desires.
2. Train the agent using PPO magic or embrace Q-learning for epic journeys.
3. Birth text from input data with the policy network.
4. Evaluate the text's quality using BLEU and semantic beauty (a reward sketch follows this list).
5. Commence your custom RL environment for text generation marvels.
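A hedged sketch of such a reward signal is shown below. It uses NLTK's sentence-level BLEU and a small `sentence-transformers` model as stand-ins for the scoring described above; the exact metrics and weighting used by this repository may differ.
```python
# Hedged sketch: reward = weighted mix of sentence BLEU and embedding cosine similarity.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sentence_transformers import SentenceTransformer, util

_embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed similarity model

def text_reward(generated: str, reference: str, bleu_weight: float = 0.5) -> float:
    bleu = sentence_bleu(
        [reference.split()], generated.split(),
        smoothing_function=SmoothingFunction().method1,
    )
    emb = _embedder.encode([generated, reference], convert_to_tensor=True)
    semantic = float(util.cos_sim(emb[0], emb[1]))
    return bleu_weight * bleu + (1.0 - bleu_weight) * semantic
```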
---
## Uniqueness and Epicness
- The union of RL and GPT-2 for text generation mastery.
- Advanced text tasks unfold gracefully with QNetwork and its heuristic powers.
- The limitless canvas to create RL agents for every text challenge.
- Rewarding text quality and semantic harmony with AI-calculated rewards.
- The blueprint for a customizable and adaptable RL text generation paradise.
---
## Get Started Now
1. Forge your QNetworkGPT2 with personalized hyperparameters.
2. Unleash the potential with RL-based training.
3. Conjure text aligned with your task and dream.
4. Assess the text with metrics and demands.
5. Fine-tune and enhance for your text generation quest.
---
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ayjays132/QNetworkGPT2")
model = AutoModelForCausalLM.from_pretrained("ayjays132/QNetworkGPT2")

# Set the EOS token as the padding token
tokenizer.pad_token = tokenizer.eos_token

# Initialize a conversation history
conversation_history = []

# Start a conversation loop
while True:
    # Get user input
    user_input = input("You: ")

    # Add user input to the conversation history
    conversation_history.append(user_input)

    # Concatenate the conversation strings
    conversation_text = " ".join(conversation_history)

    # Tokenize and pad the input
    input_ids = tokenizer.encode(conversation_text, return_tensors="pt", padding=True, truncation=True)

    # Generate a response
    output_ids = model.generate(input_ids, max_length=150, num_return_sequences=1, pad_token_id=tokenizer.eos_token_id)

    # Decode the generated response
    generated_response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

    # Print the generated response
    print("Bot:", generated_response)

    # Add bot's response to the conversation history
    conversation_history.append(generated_response)
```
---
## Explore and Create
QNetworkGPT2 is your ticket to exploring new horizons in text generation. From chatbots and content creation to storytelling and beyond, it's your AI companion for all text adventures. 🌟
Embrace innovation, adaptation, and expansion to conquer your unique text generation challenges. Your text generation revolution starts here! 📚🤖
| [ "SEMANTIC_SIMILARITY" ] | [ "CRAFT" ] | Non_BioNLP |
Adjoumani/baoule-tokenizer | Adjoumani | null | [ "region:us" ] | 1,738 | 1,738 | 0 | 0 |
---
{}
---
Your `README.md` file is already well structured, but I will improve it so that it conforms even better to Hugging Face and Google search-engine optimization (SEO) principles. Here is an optimized version:
---
### **Optimized README.md**
```markdown
---
language:
- baq
- bci
- fr
tags:
- african-nlp
- low-resource-language
- sentencepiece
- tokenizer
- baoule
- cote-divoire
- translation
- tonal-language
datasets:
- custom
license: apache-2.0
library_name: transformers
pipeline_tag: text2text-generation
widget:
- text: "Wafa sɛ yɛ ɔ fata kɛ be nga be lafi su kɛ bé trán asiɛ’n su wa’n, be bu be nga bé kɔ́ ɲanmiɛn"
  example_title: "Baoulé translation example"
---
# Baoulé Tokenizer: French-Baoulé Translation Model 🌍
**The first tokenizer specialized for the Baoulé language (Côte d'Ivoire)** 🇨🇮
This tokenizer was designed specifically for machine translation between French and Baoulé, a tonal African language spoken in Côte d'Ivoire.
[](https://huggingface.co/Adjoumani/BaouleTokenizer_V1)
## 📋 Key Features
✅ **Full support for Baoulé tonal characters** (ɛ́, ɩ̄, ɔ̀, etc.)
✅ **Optimized for Transformer-based machine translation models**
✅ **Compact vocabulary** of 206 tokens with full linguistic coverage
✅ **Native integration with 🤗 Transformers and Tokenizers**
✅ Compatible with **Google Translate Custom Model**, **Amazon Translate**, and other NLP tools
---
## 🚀 Installation and Usage
Install the required libraries:
```bash
pip install transformers sentencepiece
```
Load and use the tokenizer:
```python
from transformers import AutoTokenizer
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Adjoumani/BaouleTokenizer_V1")
# Example usage
text = "Wafa sɛ yɛ ɔ fata kɛ be nga be lafi su kɛ bé trán asiɛ’n su wa’n, be bu be nga bé kɔ́ ɲanmiɛn"
encoded = tokenizer.encode(text)
decoded = tokenizer.decode(encoded)
print(f"Tokens: {tokenizer.tokenize(text)}")
# Output: ['W', 'a', 'f', 'a', '▁s', 'ɛ', '▁y', 'ɛ', '▁ɔ', '▁f', 'a', 't', 'a', '▁k', 'ɛ', '▁b', 'e', '▁n', ...]
```
---
## 📊 Technical Details
| Parameter | Value |
|--------------------|----------------------|
| Architecture | SentencePiece BPE |
| Vocabulary size | 206 |
| Character coverage | 1.0 (Unicode) |
| Special tokens | `[BOS]`, `[EOS]`, `[UNK]`, `[PAD]` |
| Target languages | French ↔ Baoulé |
| Encoding | UTF-8 |
---
## 🎵 Supported Tones
The tokenizer handles all Baoulé tones according to the Unicode standard:
| Character | Unicode code points | Example |
|-----------|--------------|--------------|
| ɛ́ | U+025B U+0301 | Mɔ́kɛ́ |
| ɩ̄ | U+0269 U+0304 | Ɩ̄tɩ̄ |
| ɔ̀ | U+0254 U+0300 | Kɔ̀lɔ̀ |
| ɛ̂ | U+025B U+0302 | Ɛ̂sɛ̂ |
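Because the tonal characters above pair a base letter with a combining accent, it can help to normalize text to a single Unicode form before tokenizing. The snippet below is an illustrative sketch using Python's standard library and is not part of the tokenizer itself.
```python
# Hedged sketch: normalize combining tone marks (e.g. U+025B + U+0301) to NFC
# so identical syllables map to identical byte sequences before tokenization.
import unicodedata

def normalize_baoule(text: str) -> str:
    return unicodedata.normalize("NFC", text)

print(normalize_baoule("Mɔ́kɛ́"))  # same visible string, canonical code-point order
```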
---
## 💡 Recommended Use Cases
- **Machine translation** between French and Baoulé
- **Speech synthesis** for voice-assistant systems
- **Baoulé speech recognition**
- Digital educational tools for learning Baoulé
- Preservation of African linguistic heritage
---
## 🛠️ Best Practices
Handle long sentences and add custom tokens if needed:
```python
# To handle long sentences
tokenizer.model_max_length = 512
# Add custom tokens
new_tokens = ["<dialect:NDÊ>", "<dialect:SAFOUÈ>"]
tokenizer.add_tokens(new_tokens)
```
---
## 📚 Training Dataset
The training data was collected from the following sources:
- **Biblical text translations**: Data was extracted from [Glosbe](https://fr.glosbe.com/bci/fr) and manually enriched to ensure optimal quality.
- **AI-generated corpora**: Texts generated in French via [Google AI Studio](https://ai.studio.google.com/) and translated into Baoulé via Google Translate.
- **Transcribed oral corpus**: Everyday sentences annotated as part of UNESCO projects.
- **Bilingual government texts**: Official documents translated into Baoulé.
**Corpus size**: ~1,500 aligned sentences (being expanded).
---
## 📝 Citation
If you use this tokenizer in your research, please cite:
```bibtex
@misc{BaouleTokenizer2023,
author = {Adjoumani Kouakou},
title = {Baoulé Tokenizer for Low-Resource Machine Translation},
year = {2023},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Adjoumani/BaouleTokenizer_V1}}
}
```
---
## 📜 License
Apache 2.0 - [See the full license](LICENSE)
---
## 🤝 Contributing
We encourage contributions to improve this tokenizer:
- Expanding the vocabulary
- Annotating missing tones
- Adding regional dialects
For any questions or suggestions, contact us at:
[[email protected]](mailto:[email protected])
---
**SEO keywords**: Baoulé Tokenizer, French-Baoulé Translation, African NLP, Tonal Languages, Côte d'Ivoire AI, Low-Resource Language Model, SentencePiece Baoulé, African Language Preservation
| [
"TRANSLATION"
] | [
"CAS"
] | Non_BioNLP |
twadada/nmc-nignore30 | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,726 | 1,726 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: nomic_classification_nignore30
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.02985074626865
- type: ap
value: 36.54755879675939
- type: f1
value: 67.84911428462374
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 60.98745000000001
- type: ap
value: 56.79972495487593
- type: f1
value: 60.79607311981127
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 31.606000000000005
- type: f1
value: 31.20575804283948
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 21.266
- type: map_at_10
value: 35.579
- type: map_at_100
value: 36.867
- type: map_at_1000
value: 36.887
- type: map_at_3
value: 31.105
- type: map_at_5
value: 33.512
- type: mrr_at_1
value: 21.764
- type: mrr_at_10
value: 35.768
- type: mrr_at_100
value: 37.049
- type: mrr_at_1000
value: 37.069
- type: mrr_at_3
value: 31.354
- type: mrr_at_5
value: 33.694
- type: ndcg_at_1
value: 21.266
- type: ndcg_at_10
value: 43.697
- type: ndcg_at_100
value: 49.444
- type: ndcg_at_1000
value: 49.918
- type: ndcg_at_3
value: 34.415
- type: ndcg_at_5
value: 38.751999999999995
- type: precision_at_1
value: 21.266
- type: precision_at_10
value: 6.97
- type: precision_at_100
value: 0.954
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.91
- type: recall_at_1
value: 21.266
- type: recall_at_10
value: 69.70100000000001
- type: recall_at_100
value: 95.448
- type: recall_at_1000
value: 99.075
- type: recall_at_3
value: 44.025999999999996
- type: recall_at_5
value: 54.552
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 35.45486521675564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 24.270159650279354
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 53.62399843388994
- type: mrr
value: 68.1675680429143
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.62266587849676
- type: cos_sim_spearman
value: 80.48918339823612
- type: euclidean_pearson
value: 82.46661732971302
- type: euclidean_spearman
value: 80.48918339823612
- type: manhattan_pearson
value: 81.55398066885756
- type: manhattan_spearman
value: 80.27411825686711
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 73.07142857142857
- type: f1
value: 72.39723822054579
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 32.426645848653045
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 23.54829160604571
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 24.001
- type: map_at_10
value: 31.195
- type: map_at_100
value: 32.342999999999996
- type: map_at_1000
value: 32.489000000000004
- type: map_at_3
value: 28.814
- type: map_at_5
value: 30.014000000000003
- type: mrr_at_1
value: 30.186
- type: mrr_at_10
value: 37.034
- type: mrr_at_100
value: 37.881
- type: mrr_at_1000
value: 37.946000000000005
- type: mrr_at_3
value: 35.241
- type: mrr_at_5
value: 36.120999999999995
- type: ndcg_at_1
value: 30.186
- type: ndcg_at_10
value: 35.972
- type: ndcg_at_100
value: 41.25
- type: ndcg_at_1000
value: 44.171
- type: ndcg_at_3
value: 32.674
- type: ndcg_at_5
value: 33.833
- type: precision_at_1
value: 30.186
- type: precision_at_10
value: 6.723999999999999
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.172
- type: precision_at_3
value: 15.451
- type: precision_at_5
value: 10.815
- type: recall_at_1
value: 24.001
- type: recall_at_10
value: 44.057
- type: recall_at_100
value: 67.72500000000001
- type: recall_at_1000
value: 87.464
- type: recall_at_3
value: 33.817
- type: recall_at_5
value: 37.684
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 16.766000000000002
- type: map_at_10
value: 23.07
- type: map_at_100
value: 24.062
- type: map_at_1000
value: 24.178
- type: map_at_3
value: 21.364
- type: map_at_5
value: 22.3
- type: mrr_at_1
value: 21.146
- type: mrr_at_10
value: 27.24
- type: mrr_at_100
value: 28.092
- type: mrr_at_1000
value: 28.163
- type: mrr_at_3
value: 25.605
- type: mrr_at_5
value: 26.567
- type: ndcg_at_1
value: 21.146
- type: ndcg_at_10
value: 27.031
- type: ndcg_at_100
value: 31.430999999999997
- type: ndcg_at_1000
value: 34.086
- type: ndcg_at_3
value: 24.136
- type: ndcg_at_5
value: 25.462
- type: precision_at_1
value: 21.146
- type: precision_at_10
value: 5.006
- type: precision_at_100
value: 0.901
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 11.762
- type: precision_at_5
value: 8.229000000000001
- type: recall_at_1
value: 16.766000000000002
- type: recall_at_10
value: 34.55
- type: recall_at_100
value: 53.542
- type: recall_at_1000
value: 71.66900000000001
- type: recall_at_3
value: 26.205000000000002
- type: recall_at_5
value: 29.854000000000003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 26.579000000000004
- type: map_at_10
value: 35.482
- type: map_at_100
value: 36.564
- type: map_at_1000
value: 36.656
- type: map_at_3
value: 32.940999999999995
- type: map_at_5
value: 34.331
- type: mrr_at_1
value: 30.784
- type: mrr_at_10
value: 38.721
- type: mrr_at_100
value: 39.592
- type: mrr_at_1000
value: 39.653
- type: mrr_at_3
value: 36.468
- type: mrr_at_5
value: 37.688
- type: ndcg_at_1
value: 30.784
- type: ndcg_at_10
value: 40.351
- type: ndcg_at_100
value: 45.499
- type: ndcg_at_1000
value: 47.641
- type: ndcg_at_3
value: 35.605
- type: ndcg_at_5
value: 37.798
- type: precision_at_1
value: 30.784
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.862000000000002
- type: precision_at_5
value: 11.008999999999999
- type: recall_at_1
value: 26.579000000000004
- type: recall_at_10
value: 51.978
- type: recall_at_100
value: 75.331
- type: recall_at_1000
value: 90.774
- type: recall_at_3
value: 39.149
- type: recall_at_5
value: 44.516
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 13.013
- type: map_at_10
value: 17.393
- type: map_at_100
value: 18.256
- type: map_at_1000
value: 18.364
- type: map_at_3
value: 15.812000000000001
- type: map_at_5
value: 16.601
- type: mrr_at_1
value: 14.237
- type: mrr_at_10
value: 18.706999999999997
- type: mrr_at_100
value: 19.553
- type: mrr_at_1000
value: 19.651
- type: mrr_at_3
value: 17.081
- type: mrr_at_5
value: 17.895
- type: ndcg_at_1
value: 14.237
- type: ndcg_at_10
value: 20.315
- type: ndcg_at_100
value: 24.914
- type: ndcg_at_1000
value: 28.244999999999997
- type: ndcg_at_3
value: 16.994
- type: ndcg_at_5
value: 18.396
- type: precision_at_1
value: 14.237
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.583
- type: precision_at_1000
value: 0.092
- type: precision_at_3
value: 7.0809999999999995
- type: precision_at_5
value: 4.994
- type: recall_at_1
value: 13.013
- type: recall_at_10
value: 28.297
- type: recall_at_100
value: 50.113
- type: recall_at_1000
value: 76.19500000000001
- type: recall_at_3
value: 19.062
- type: recall_at_5
value: 22.527
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 10.439
- type: map_at_100
value: 11.26
- type: map_at_1000
value: 11.394
- type: map_at_3
value: 9.314
- type: map_at_5
value: 9.832
- type: mrr_at_1
value: 8.831
- type: mrr_at_10
value: 12.902
- type: mrr_at_100
value: 13.799
- type: mrr_at_1000
value: 13.901
- type: mrr_at_3
value: 11.692
- type: mrr_at_5
value: 12.200999999999999
- type: ndcg_at_1
value: 8.831
- type: ndcg_at_10
value: 12.973
- type: ndcg_at_100
value: 17.465
- type: ndcg_at_1000
value: 21.203
- type: ndcg_at_3
value: 10.778
- type: ndcg_at_5
value: 11.601
- type: precision_at_1
value: 8.831
- type: precision_at_10
value: 2.475
- type: precision_at_100
value: 0.553
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 5.265000000000001
- type: precision_at_5
value: 3.781
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 18.386
- type: recall_at_100
value: 38.885999999999996
- type: recall_at_1000
value: 66.621
- type: recall_at_3
value: 12.235
- type: recall_at_5
value: 14.374999999999998
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 17.946
- type: map_at_10
value: 24.188000000000002
- type: map_at_100
value: 25.402
- type: map_at_1000
value: 25.544
- type: map_at_3
value: 22.157
- type: map_at_5
value: 23.315
- type: mrr_at_1
value: 22.233
- type: mrr_at_10
value: 28.703
- type: mrr_at_100
value: 29.669
- type: mrr_at_1000
value: 29.748
- type: mrr_at_3
value: 26.676
- type: mrr_at_5
value: 27.894000000000002
- type: ndcg_at_1
value: 22.233
- type: ndcg_at_10
value: 28.483999999999998
- type: ndcg_at_100
value: 34.239999999999995
- type: ndcg_at_1000
value: 37.351
- type: ndcg_at_3
value: 25.018
- type: ndcg_at_5
value: 26.679000000000002
- type: precision_at_1
value: 22.233
- type: precision_at_10
value: 5.236
- type: precision_at_100
value: 0.962
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 11.806
- type: precision_at_5
value: 8.566
- type: recall_at_1
value: 17.946
- type: recall_at_10
value: 37.049
- type: recall_at_100
value: 62.473
- type: recall_at_1000
value: 83.829
- type: recall_at_3
value: 27.022000000000002
- type: recall_at_5
value: 31.435000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 13.966000000000001
- type: map_at_10
value: 19.797
- type: map_at_100
value: 20.764
- type: map_at_1000
value: 20.913
- type: map_at_3
value: 17.688000000000002
- type: map_at_5
value: 18.796
- type: mrr_at_1
value: 17.122999999999998
- type: mrr_at_10
value: 23.277
- type: mrr_at_100
value: 24.095
- type: mrr_at_1000
value: 24.197
- type: mrr_at_3
value: 21.176000000000002
- type: mrr_at_5
value: 22.323
- type: ndcg_at_1
value: 17.122999999999998
- type: ndcg_at_10
value: 23.860999999999997
- type: ndcg_at_100
value: 28.669
- type: ndcg_at_1000
value: 32.375
- type: ndcg_at_3
value: 19.983999999999998
- type: ndcg_at_5
value: 21.647
- type: precision_at_1
value: 17.122999999999998
- type: precision_at_10
value: 4.623
- type: precision_at_100
value: 0.839
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 9.551
- type: precision_at_5
value: 7.1
- type: recall_at_1
value: 13.966000000000001
- type: recall_at_10
value: 32.629999999999995
- type: recall_at_100
value: 53.842
- type: recall_at_1000
value: 80.583
- type: recall_at_3
value: 21.804000000000002
- type: recall_at_5
value: 26.101999999999997
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 14.974750000000004
- type: map_at_10
value: 20.34575
- type: map_at_100
value: 21.290416666666665
- type: map_at_1000
value: 21.41825
- type: map_at_3
value: 18.576500000000003
- type: map_at_5
value: 19.546166666666668
- type: mrr_at_1
value: 18.049249999999997
- type: mrr_at_10
value: 23.45216666666667
- type: mrr_at_100
value: 24.29241666666667
- type: mrr_at_1000
value: 24.37841666666667
- type: mrr_at_3
value: 21.728749999999998
- type: mrr_at_5
value: 22.680916666666665
- type: ndcg_at_1
value: 18.049249999999997
- type: ndcg_at_10
value: 23.90125
- type: ndcg_at_100
value: 28.57325
- type: ndcg_at_1000
value: 31.747583333333335
- type: ndcg_at_3
value: 20.71783333333333
- type: ndcg_at_5
value: 22.17008333333333
- type: precision_at_1
value: 18.049249999999997
- type: precision_at_10
value: 4.257666666666667
- type: precision_at_100
value: 0.7843333333333332
- type: precision_at_1000
value: 0.12375000000000003
- type: precision_at_3
value: 9.573750000000002
- type: precision_at_5
value: 6.871666666666666
- type: recall_at_1
value: 14.974750000000004
- type: recall_at_10
value: 31.535416666666666
- type: recall_at_100
value: 52.869583333333324
- type: recall_at_1000
value: 75.93208333333334
- type: recall_at_3
value: 22.561833333333333
- type: recall_at_5
value: 26.351583333333334
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 11.584
- type: map_at_10
value: 15.47
- type: map_at_100
value: 16.276
- type: map_at_1000
value: 16.361
- type: map_at_3
value: 14.022000000000002
- type: map_at_5
value: 14.884
- type: mrr_at_1
value: 13.65
- type: mrr_at_10
value: 17.566000000000003
- type: mrr_at_100
value: 18.335
- type: mrr_at_1000
value: 18.411
- type: mrr_at_3
value: 16.053
- type: mrr_at_5
value: 16.843
- type: ndcg_at_1
value: 13.65
- type: ndcg_at_10
value: 18.208
- type: ndcg_at_100
value: 22.352
- type: ndcg_at_1000
value: 24.969
- type: ndcg_at_3
value: 15.459
- type: ndcg_at_5
value: 16.817
- type: precision_at_1
value: 13.65
- type: precision_at_10
value: 3.083
- type: precision_at_100
value: 0.561
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.902
- type: precision_at_5
value: 4.968999999999999
- type: recall_at_1
value: 11.584
- type: recall_at_10
value: 24.629
- type: recall_at_100
value: 43.963
- type: recall_at_1000
value: 63.944
- type: recall_at_3
value: 17.155
- type: recall_at_5
value: 20.598
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 7.792000000000001
- type: map_at_10
value: 11.246
- type: map_at_100
value: 11.955
- type: map_at_1000
value: 12.076
- type: map_at_3
value: 10.176
- type: map_at_5
value: 10.802
- type: mrr_at_1
value: 9.67
- type: mrr_at_10
value: 13.591000000000001
- type: mrr_at_100
value: 14.285999999999998
- type: mrr_at_1000
value: 14.385
- type: mrr_at_3
value: 12.394
- type: mrr_at_5
value: 13.104
- type: ndcg_at_1
value: 9.67
- type: ndcg_at_10
value: 13.645
- type: ndcg_at_100
value: 17.562
- type: ndcg_at_1000
value: 21.101
- type: ndcg_at_3
value: 11.635
- type: ndcg_at_5
value: 12.638
- type: precision_at_1
value: 9.67
- type: precision_at_10
value: 2.54
- type: precision_at_100
value: 0.538
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 5.632000000000001
- type: precision_at_5
value: 4.136
- type: recall_at_1
value: 7.792000000000001
- type: recall_at_10
value: 18.63
- type: recall_at_100
value: 37.047999999999995
- type: recall_at_1000
value: 63.391
- type: recall_at_3
value: 12.956999999999999
- type: recall_at_5
value: 15.581
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 13.358999999999998
- type: map_at_10
value: 18.154999999999998
- type: map_at_100
value: 19.008
- type: map_at_1000
value: 19.125
- type: map_at_3
value: 16.645
- type: map_at_5
value: 17.544999999999998
- type: mrr_at_1
value: 15.672
- type: mrr_at_10
value: 20.973
- type: mrr_at_100
value: 21.782
- type: mrr_at_1000
value: 21.88
- type: mrr_at_3
value: 19.356
- type: mrr_at_5
value: 20.28
- type: ndcg_at_1
value: 15.672
- type: ndcg_at_10
value: 21.391
- type: ndcg_at_100
value: 25.71
- type: ndcg_at_1000
value: 29.016
- type: ndcg_at_3
value: 18.489
- type: ndcg_at_5
value: 19.916
- type: precision_at_1
value: 15.672
- type: precision_at_10
value: 3.573
- type: precision_at_100
value: 0.636
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 8.488999999999999
- type: precision_at_5
value: 5.989
- type: recall_at_1
value: 13.358999999999998
- type: recall_at_10
value: 28.695999999999998
- type: recall_at_100
value: 48.165
- type: recall_at_1000
value: 72.64500000000001
- type: recall_at_3
value: 20.573
- type: recall_at_5
value: 24.284
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 15.881
- type: map_at_10
value: 21.754
- type: map_at_100
value: 22.88
- type: map_at_1000
value: 23.087
- type: map_at_3
value: 19.827
- type: map_at_5
value: 20.964
- type: mrr_at_1
value: 19.564999999999998
- type: mrr_at_10
value: 25.246000000000002
- type: mrr_at_100
value: 26.163999999999998
- type: mrr_at_1000
value: 26.240999999999996
- type: mrr_at_3
value: 23.352999999999998
- type: mrr_at_5
value: 24.587999999999997
- type: ndcg_at_1
value: 19.564999999999998
- type: ndcg_at_10
value: 25.740000000000002
- type: ndcg_at_100
value: 30.977
- type: ndcg_at_1000
value: 34.486
- type: ndcg_at_3
value: 22.625
- type: ndcg_at_5
value: 24.294
- type: precision_at_1
value: 19.564999999999998
- type: precision_at_10
value: 5.0200000000000005
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.201
- type: precision_at_3
value: 10.738
- type: precision_at_5
value: 8.103
- type: recall_at_1
value: 15.881
- type: recall_at_10
value: 32.918
- type: recall_at_100
value: 58.184000000000005
- type: recall_at_1000
value: 81.76299999999999
- type: recall_at_3
value: 23.992
- type: recall_at_5
value: 28.265
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 12.027000000000001
- type: map_at_10
value: 15.959999999999999
- type: map_at_100
value: 16.715
- type: map_at_1000
value: 16.832
- type: map_at_3
value: 14.158000000000001
- type: map_at_5
value: 15.17
- type: mrr_at_1
value: 13.494
- type: mrr_at_10
value: 17.466
- type: mrr_at_100
value: 18.261
- type: mrr_at_1000
value: 18.365000000000002
- type: mrr_at_3
value: 15.65
- type: mrr_at_5
value: 16.667
- type: ndcg_at_1
value: 13.494
- type: ndcg_at_10
value: 18.844
- type: ndcg_at_100
value: 22.81
- type: ndcg_at_1000
value: 26.327
- type: ndcg_at_3
value: 15.217
- type: ndcg_at_5
value: 16.96
- type: precision_at_1
value: 13.494
- type: precision_at_10
value: 3.05
- type: precision_at_100
value: 0.532
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 6.346
- type: precision_at_5
value: 4.769
- type: recall_at_1
value: 12.027000000000001
- type: recall_at_10
value: 26.605
- type: recall_at_100
value: 45.163
- type: recall_at_1000
value: 72.307
- type: recall_at_3
value: 16.771
- type: recall_at_5
value: 20.998
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 6.479
- type: map_at_10
value: 11.559
- type: map_at_100
value: 12.936
- type: map_at_1000
value: 13.120000000000001
- type: map_at_3
value: 9.377
- type: map_at_5
value: 10.494
- type: mrr_at_1
value: 14.396999999999998
- type: mrr_at_10
value: 23.039
- type: mrr_at_100
value: 24.141000000000002
- type: mrr_at_1000
value: 24.215999999999998
- type: mrr_at_3
value: 19.814999999999998
- type: mrr_at_5
value: 21.656
- type: ndcg_at_1
value: 14.396999999999998
- type: ndcg_at_10
value: 17.258000000000003
- type: ndcg_at_100
value: 23.615
- type: ndcg_at_1000
value: 27.605
- type: ndcg_at_3
value: 13.114999999999998
- type: ndcg_at_5
value: 14.698
- type: precision_at_1
value: 14.396999999999998
- type: precision_at_10
value: 5.713
- type: precision_at_100
value: 1.25
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 9.924
- type: precision_at_5
value: 8.104
- type: recall_at_1
value: 6.479
- type: recall_at_10
value: 22.088
- type: recall_at_100
value: 44.681
- type: recall_at_1000
value: 67.869
- type: recall_at_3
value: 12.203
- type: recall_at_5
value: 16.275000000000002
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 4.618
- type: map_at_10
value: 10.217
- type: map_at_100
value: 14.038999999999998
- type: map_at_1000
value: 15.03
- type: map_at_3
value: 7.523000000000001
- type: map_at_5
value: 8.688
- type: mrr_at_1
value: 41.75
- type: mrr_at_10
value: 51.991
- type: mrr_at_100
value: 52.711
- type: mrr_at_1000
value: 52.746
- type: mrr_at_3
value: 49.5
- type: mrr_at_5
value: 50.961999999999996
- type: ndcg_at_1
value: 30.875000000000004
- type: ndcg_at_10
value: 24.709999999999997
- type: ndcg_at_100
value: 27.584999999999997
- type: ndcg_at_1000
value: 34.508
- type: ndcg_at_3
value: 27.88
- type: ndcg_at_5
value: 26.168999999999997
- type: precision_at_1
value: 41.75
- type: precision_at_10
value: 21.45
- type: precision_at_100
value: 6.795
- type: precision_at_1000
value: 1.43
- type: precision_at_3
value: 33.083
- type: precision_at_5
value: 27.750000000000004
- type: recall_at_1
value: 4.618
- type: recall_at_10
value: 14.898
- type: recall_at_100
value: 33.027
- type: recall_at_1000
value: 57.036
- type: recall_at_3
value: 8.995000000000001
- type: recall_at_5
value: 11.23
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.19499999999999
- type: f1
value: 40.60048839070268
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 17.11
- type: map_at_10
value: 25.478
- type: map_at_100
value: 26.436
- type: map_at_1000
value: 26.51
- type: map_at_3
value: 22.996
- type: map_at_5
value: 24.329
- type: mrr_at_1
value: 18.317
- type: mrr_at_10
value: 27.090999999999998
- type: mrr_at_100
value: 28.037
- type: mrr_at_1000
value: 28.102
- type: mrr_at_3
value: 24.532
- type: mrr_at_5
value: 25.918999999999997
- type: ndcg_at_1
value: 18.317
- type: ndcg_at_10
value: 30.448999999999998
- type: ndcg_at_100
value: 35.302
- type: ndcg_at_1000
value: 37.325
- type: ndcg_at_3
value: 25.326999999999998
- type: ndcg_at_5
value: 27.716
- type: precision_at_1
value: 18.317
- type: precision_at_10
value: 4.8469999999999995
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 10.975999999999999
- type: precision_at_5
value: 7.846
- type: recall_at_1
value: 17.11
- type: recall_at_10
value: 44.466
- type: recall_at_100
value: 67.06299999999999
- type: recall_at_1000
value: 82.64200000000001
- type: recall_at_3
value: 30.509999999999998
- type: recall_at_5
value: 36.27
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 7.446999999999999
- type: map_at_10
value: 12.188
- type: map_at_100
value: 13.241
- type: map_at_1000
value: 13.450000000000001
- type: map_at_3
value: 10.184999999999999
- type: map_at_5
value: 11.266
- type: mrr_at_1
value: 15.123000000000001
- type: mrr_at_10
value: 21.397
- type: mrr_at_100
value: 22.303
- type: mrr_at_1000
value: 22.398
- type: mrr_at_3
value: 19.187
- type: mrr_at_5
value: 20.383000000000003
- type: ndcg_at_1
value: 15.123000000000001
- type: ndcg_at_10
value: 16.957
- type: ndcg_at_100
value: 22.147
- type: ndcg_at_1000
value: 26.759
- type: ndcg_at_3
value: 14.091000000000001
- type: ndcg_at_5
value: 15.135000000000002
- type: precision_at_1
value: 15.123000000000001
- type: precision_at_10
value: 4.938
- type: precision_at_100
value: 1.019
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 9.568
- type: precision_at_5
value: 7.438000000000001
- type: recall_at_1
value: 7.446999999999999
- type: recall_at_10
value: 22.094
- type: recall_at_100
value: 42.397
- type: recall_at_1000
value: 71.15700000000001
- type: recall_at_3
value: 12.879
- type: recall_at_5
value: 16.49
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 18.285
- type: map_at_10
value: 25.703
- type: map_at_100
value: 26.634
- type: map_at_1000
value: 26.741999999999997
- type: map_at_3
value: 23.642
- type: map_at_5
value: 24.826
- type: mrr_at_1
value: 36.57
- type: mrr_at_10
value: 43.772
- type: mrr_at_100
value: 44.51
- type: mrr_at_1000
value: 44.561
- type: mrr_at_3
value: 41.787
- type: mrr_at_5
value: 42.964
- type: ndcg_at_1
value: 36.57
- type: ndcg_at_10
value: 32.763999999999996
- type: ndcg_at_100
value: 37.077
- type: ndcg_at_1000
value: 39.666000000000004
- type: ndcg_at_3
value: 28.906
- type: ndcg_at_5
value: 30.86
- type: precision_at_1
value: 36.57
- type: precision_at_10
value: 7.202
- type: precision_at_100
value: 1.065
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 18.231
- type: precision_at_5
value: 12.483
- type: recall_at_1
value: 18.285
- type: recall_at_10
value: 36.009
- type: recall_at_100
value: 53.27499999999999
- type: recall_at_1000
value: 70.635
- type: recall_at_3
value: 27.345999999999997
- type: recall_at_5
value: 31.208999999999996
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 61.361999999999995
- type: ap
value: 57.09674595597791
- type: f1
value: 60.94720401382382
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 6.686
- type: map_at_10
value: 11.454
- type: map_at_100
value: 12.342
- type: map_at_1000
value: 12.447
- type: map_at_3
value: 9.722
- type: map_at_5
value: 10.632
- type: mrr_at_1
value: 6.891
- type: mrr_at_10
value: 11.768
- type: mrr_at_100
value: 12.651000000000002
- type: mrr_at_1000
value: 12.753
- type: mrr_at_3
value: 10.001999999999999
- type: mrr_at_5
value: 10.918999999999999
- type: ndcg_at_1
value: 6.848
- type: ndcg_at_10
value: 14.466000000000001
- type: ndcg_at_100
value: 19.301
- type: ndcg_at_1000
value: 22.458
- type: ndcg_at_3
value: 10.836
- type: ndcg_at_5
value: 12.475
- type: precision_at_1
value: 6.848
- type: precision_at_10
value: 2.48
- type: precision_at_100
value: 0.49899999999999994
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 4.766
- type: precision_at_5
value: 3.682
- type: recall_at_1
value: 6.686
- type: recall_at_10
value: 23.82
- type: recall_at_100
value: 47.349999999999994
- type: recall_at_1000
value: 72.66
- type: recall_at_3
value: 13.811000000000002
- type: recall_at_5
value: 17.76
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.67213862289103
- type: f1
value: 86.45841301738238
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 56.82170542635659
- type: f1
value: 39.12615117855274
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.18829858776058
- type: f1
value: 58.617914607265064
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.45595158036315
- type: f1
value: 64.9778374481982
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 29.531989286141012
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 26.070324322784792
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.516858081965257
- type: mrr
value: 30.51047930520146
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 4.131
- type: map_at_10
value: 8.826
- type: map_at_100
value: 11.094999999999999
- type: map_at_1000
value: 12.484
- type: map_at_3
value: 6.723
- type: map_at_5
value: 7.683
- type: mrr_at_1
value: 34.985
- type: mrr_at_10
value: 44.921
- type: mrr_at_100
value: 45.62
- type: mrr_at_1000
value: 45.676
- type: mrr_at_3
value: 42.931000000000004
- type: mrr_at_5
value: 44.385999999999996
- type: ndcg_at_1
value: 32.507999999999996
- type: ndcg_at_10
value: 26.773000000000003
- type: ndcg_at_100
value: 24.751
- type: ndcg_at_1000
value: 34.19
- type: ndcg_at_3
value: 31.213
- type: ndcg_at_5
value: 29.249000000000002
- type: precision_at_1
value: 34.985
- type: precision_at_10
value: 20.247999999999998
- type: precision_at_100
value: 6.907000000000001
- type: precision_at_1000
value: 2.031
- type: precision_at_3
value: 30.341
- type: precision_at_5
value: 25.759
- type: recall_at_1
value: 4.131
- type: recall_at_10
value: 12.465
- type: recall_at_100
value: 25.776
- type: recall_at_1000
value: 59.876
- type: recall_at_3
value: 7.968
- type: recall_at_5
value: 9.968
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 9.277000000000001
- type: map_at_10
value: 15.709999999999999
- type: map_at_100
value: 16.980999999999998
- type: map_at_1000
value: 17.074
- type: map_at_3
value: 13.157
- type: map_at_5
value: 14.571000000000002
- type: mrr_at_1
value: 10.574
- type: mrr_at_10
value: 17.344
- type: mrr_at_100
value: 18.506
- type: mrr_at_1000
value: 18.584999999999997
- type: mrr_at_3
value: 14.677000000000001
- type: mrr_at_5
value: 16.213
- type: ndcg_at_1
value: 10.574
- type: ndcg_at_10
value: 20.044
- type: ndcg_at_100
value: 26.447
- type: ndcg_at_1000
value: 29.084
- type: ndcg_at_3
value: 14.787
- type: ndcg_at_5
value: 17.362
- type: precision_at_1
value: 10.574
- type: precision_at_10
value: 3.7600000000000002
- type: precision_at_100
value: 0.738
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 6.933
- type: precision_at_5
value: 5.608
- type: recall_at_1
value: 9.277000000000001
- type: recall_at_10
value: 31.948
- type: recall_at_100
value: 61.708
- type: recall_at_1000
value: 82.07799999999999
- type: recall_at_3
value: 18.045
- type: recall_at_5
value: 24.038999999999998
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 64.268
- type: map_at_10
value: 77.19500000000001
- type: map_at_100
value: 77.95299999999999
- type: map_at_1000
value: 77.986
- type: map_at_3
value: 74.30499999999999
- type: map_at_5
value: 76.054
- type: mrr_at_1
value: 74.09
- type: mrr_at_10
value: 81.384
- type: mrr_at_100
value: 81.592
- type: mrr_at_1000
value: 81.597
- type: mrr_at_3
value: 80.00500000000001
- type: mrr_at_5
value: 80.876
- type: ndcg_at_1
value: 74.16
- type: ndcg_at_10
value: 81.813
- type: ndcg_at_100
value: 83.787
- type: ndcg_at_1000
value: 84.11800000000001
- type: ndcg_at_3
value: 78.389
- type: ndcg_at_5
value: 80.123
- type: precision_at_1
value: 74.16
- type: precision_at_10
value: 12.35
- type: precision_at_100
value: 1.466
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.067
- type: precision_at_5
value: 22.442
- type: recall_at_1
value: 64.268
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 97.935
- type: recall_at_1000
value: 99.703
- type: recall_at_3
value: 80.752
- type: recall_at_5
value: 85.63300000000001
- type: map_at_1
value: 2.948
- type: map_at_10
value: 7.126
- type: map_at_100
value: 8.462
- type: map_at_1000
value: 8.713
- type: map_at_3
value: 5.143
- type: map_at_5
value: 6.117
- type: mrr_at_1
value: 14.499999999999998
- type: mrr_at_10
value: 22.455
- type: mrr_at_100
value: 23.666
- type: mrr_at_1000
value: 23.745
- type: mrr_at_3
value: 19.417
- type: mrr_at_5
value: 21.117
- type: ndcg_at_1
value: 14.499999999999998
- type: ndcg_at_10
value: 12.666
- type: ndcg_at_100
value: 18.993
- type: ndcg_at_1000
value: 24.09
- type: ndcg_at_3
value: 11.655999999999999
- type: ndcg_at_5
value: 10.342
- type: precision_at_1
value: 14.499999999999998
- type: precision_at_10
value: 6.65
- type: precision_at_100
value: 1.598
- type: precision_at_1000
value: 0.28300000000000003
- type: precision_at_3
value: 10.8
- type: precision_at_5
value: 9.1
- type: recall_at_1
value: 2.948
- type: recall_at_10
value: 13.492
- type: recall_at_100
value: 32.448
- type: recall_at_1000
value: 57.553
- type: recall_at_3
value: 6.578
- type: recall_at_5
value: 9.242
- type: map_at_1
value: 0.129
- type: map_at_10
value: 0.6890000000000001
- type: map_at_100
value: 3.511
- type: map_at_1000
value: 8.943
- type: map_at_3
value: 0.304
- type: map_at_5
value: 0.42700000000000005
- type: mrr_at_1
value: 56.00000000000001
- type: mrr_at_10
value: 65.908
- type: mrr_at_100
value: 66.60199999999999
- type: mrr_at_1000
value: 66.60199999999999
- type: mrr_at_3
value: 63.333
- type: mrr_at_5
value: 64.23299999999999
- type: ndcg_at_1
value: 51.0
- type: ndcg_at_10
value: 39.304
- type: ndcg_at_100
value: 29.392000000000003
- type: ndcg_at_1000
value: 26.044
- type: ndcg_at_3
value: 45.408
- type: ndcg_at_5
value: 41.997
- type: precision_at_1
value: 56.00000000000001
- type: precision_at_10
value: 40.8
- type: precision_at_100
value: 30.48
- type: precision_at_1000
value: 12.692
- type: precision_at_3
value: 48.0
- type: precision_at_5
value: 43.6
- type: recall_at_1
value: 0.129
- type: recall_at_10
value: 0.893
- type: recall_at_100
value: 6.324000000000001
- type: recall_at_1000
value: 24.964
- type: recall_at_3
value: 0.33999999999999997
- type: recall_at_5
value: 0.505
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 39.191626251430044
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 47.00784930616429
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 75.38146213095916
- type: cos_sim_spearman
value: 65.36914729991646
- type: euclidean_pearson
value: 70.34893420889419
- type: euclidean_spearman
value: 65.36925972117625
- type: manhattan_pearson
value: 68.16816720045782
- type: manhattan_spearman
value: 64.0884396246228
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 74.45813185900207
- type: cos_sim_spearman
value: 68.03206487736479
- type: euclidean_pearson
value: 70.55331228911669
- type: euclidean_spearman
value: 68.03330456319067
- type: manhattan_pearson
value: 68.32513309931606
- type: manhattan_spearman
value: 66.90519361570585
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.32203252223916
- type: cos_sim_spearman
value: 78.44952447167366
- type: euclidean_pearson
value: 78.18870184193474
- type: euclidean_spearman
value: 78.44956228059971
- type: manhattan_pearson
value: 77.82417744157945
- type: manhattan_spearman
value: 78.17317129725184
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 77.98515479114604
- type: cos_sim_spearman
value: 74.70914230860409
- type: euclidean_pearson
value: 76.81874418213698
- type: euclidean_spearman
value: 74.70913261737951
- type: manhattan_pearson
value: 75.54410520012546
- type: manhattan_spearman
value: 73.74596322038998
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 81.34912352314105
- type: cos_sim_spearman
value: 82.13479378308254
- type: euclidean_pearson
value: 82.07291865315551
- type: euclidean_spearman
value: 82.13479226815167
- type: manhattan_pearson
value: 81.51909627091456
- type: manhattan_spearman
value: 81.70075499671213
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 76.1116492955691
- type: cos_sim_spearman
value: 77.19800116078945
- type: euclidean_pearson
value: 76.8231316467101
- type: euclidean_spearman
value: 77.19883015620502
- type: manhattan_pearson
value: 77.10588536013977
- type: manhattan_spearman
value: 77.50215416532438
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.10770264372314
- type: cos_sim_spearman
value: 84.97403648808209
- type: euclidean_pearson
value: 84.41825024902698
- type: euclidean_spearman
value: 84.97491009412074
- type: manhattan_pearson
value: 84.16827578787243
- type: manhattan_spearman
value: 84.92739867128569
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 62.91807215204278
- type: cos_sim_spearman
value: 59.61282196137074
- type: euclidean_pearson
value: 62.702286829442436
- type: euclidean_spearman
value: 59.61282196137074
- type: manhattan_pearson
value: 62.26491120673072
- type: manhattan_spearman
value: 59.7161013914999
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 78.43114398442724
- type: cos_sim_spearman
value: 77.35423527756463
- type: euclidean_pearson
value: 78.2269102978861
- type: euclidean_spearman
value: 77.35428366374488
- type: manhattan_pearson
value: 77.26973789544932
- type: manhattan_spearman
value: 76.58307796792111
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 72.61932743075317
- type: mrr
value: 91.38920810489437
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 36.861
- type: map_at_10
value: 45.93
- type: map_at_100
value: 46.861000000000004
- type: map_at_1000
value: 46.924
- type: map_at_3
value: 43.283
- type: map_at_5
value: 44.675
- type: mrr_at_1
value: 39.333
- type: mrr_at_10
value: 47.906
- type: mrr_at_100
value: 48.665000000000006
- type: mrr_at_1000
value: 48.722
- type: mrr_at_3
value: 45.611000000000004
- type: mrr_at_5
value: 46.778
- type: ndcg_at_1
value: 39.333
- type: ndcg_at_10
value: 50.970000000000006
- type: ndcg_at_100
value: 55.491
- type: ndcg_at_1000
value: 57.099
- type: ndcg_at_3
value: 45.837
- type: ndcg_at_5
value: 48.081
- type: precision_at_1
value: 39.333
- type: precision_at_10
value: 7.199999999999999
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 12.4
- type: recall_at_1
value: 36.861
- type: recall_at_10
value: 64.839
- type: recall_at_100
value: 85.983
- type: recall_at_1000
value: 98.467
- type: recall_at_3
value: 50.678
- type: recall_at_5
value: 56.24400000000001
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.75742574257426
- type: cos_sim_ap
value: 93.4938177052363
- type: cos_sim_f1
value: 87.45019920318725
- type: cos_sim_precision
value: 87.10317460317461
- type: cos_sim_recall
value: 87.8
- type: dot_accuracy
value: 99.75742574257426
- type: dot_ap
value: 93.4938177052363
- type: dot_f1
value: 87.45019920318725
- type: dot_precision
value: 87.10317460317461
- type: dot_recall
value: 87.8
- type: euclidean_accuracy
value: 99.75742574257426
- type: euclidean_ap
value: 93.4938177052363
- type: euclidean_f1
value: 87.45019920318725
- type: euclidean_precision
value: 87.10317460317461
- type: euclidean_recall
value: 87.8
- type: manhattan_accuracy
value: 99.77425742574258
- type: manhattan_ap
value: 94.11582049960462
- type: manhattan_f1
value: 88.3367139959432
- type: manhattan_precision
value: 89.60905349794238
- type: manhattan_recall
value: 87.1
- type: max_accuracy
value: 99.77425742574258
- type: max_ap
value: 94.11582049960462
- type: max_f1
value: 88.3367139959432
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 40.69098529569445
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 29.68544212745689
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 43.065922067847836
- type: mrr
value: 43.64432136490961
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.750957237960492
- type: cos_sim_spearman
value: 30.099771071145582
- type: dot_pearson
value: 29.75095720371408
- type: dot_spearman
value: 30.128683537072114
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.6179999999999999
- type: map_at_10
value: 8.232000000000001
- type: map_at_100
value: 14.643999999999998
- type: map_at_1000
value: 16.259
- type: map_at_3
value: 4.2090000000000005
- type: map_at_5
value: 5.401
- type: mrr_at_1
value: 24.490000000000002
- type: mrr_at_10
value: 43.963
- type: mrr_at_100
value: 45.022
- type: mrr_at_1000
value: 45.039
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 42.687000000000005
- type: ndcg_at_1
value: 23.469
- type: ndcg_at_10
value: 22.526
- type: ndcg_at_100
value: 36.411
- type: ndcg_at_1000
value: 47.461
- type: ndcg_at_3
value: 27.176000000000002
- type: ndcg_at_5
value: 23.787
- type: precision_at_1
value: 24.490000000000002
- type: precision_at_10
value: 20.0
- type: precision_at_100
value: 8.286
- type: precision_at_1000
value: 1.543
- type: precision_at_3
value: 29.252
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 1.6179999999999999
- type: recall_at_10
value: 14.443
- type: recall_at_100
value: 50.073
- type: recall_at_1000
value: 83.56700000000001
- type: recall_at_3
value: 5.831
- type: recall_at_5
value: 7.797
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.5932
- type: ap
value: 13.748287764670659
- type: f1
value: 53.6121537777008
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 48.60498019241653
- type: f1
value: 48.8190614849162
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 37.40279692338929
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.11378673183525
- type: cos_sim_ap
value: 63.412004464549696
- type: cos_sim_f1
value: 60.880921286952386
- type: cos_sim_precision
value: 55.34211094323332
- type: cos_sim_recall
value: 67.65171503957784
- type: dot_accuracy
value: 83.11378673183525
- type: dot_ap
value: 63.412004464549696
- type: dot_f1
value: 60.880921286952386
- type: dot_precision
value: 55.34211094323332
- type: dot_recall
value: 67.65171503957784
- type: euclidean_accuracy
value: 83.11378673183525
- type: euclidean_ap
value: 63.412004464549696
- type: euclidean_f1
value: 60.880921286952386
- type: euclidean_precision
value: 55.34211094323332
- type: euclidean_recall
value: 67.65171503957784
- type: manhattan_accuracy
value: 82.13625797222389
- type: manhattan_ap
value: 60.704142220415335
- type: manhattan_f1
value: 58.10686319668357
- type: manhattan_precision
value: 51.55292194523907
- type: manhattan_recall
value: 66.56992084432719
- type: max_accuracy
value: 83.11378673183525
- type: max_ap
value: 63.412004464549696
- type: max_f1
value: 60.880921286952386
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.48593161796096
- type: cos_sim_ap
value: 83.09276630048417
- type: cos_sim_f1
value: 75.22376690154258
- type: cos_sim_precision
value: 74.4328031958996
- type: cos_sim_recall
value: 76.03172158915923
- type: dot_accuracy
value: 87.48593161796096
- type: dot_ap
value: 83.09276630048417
- type: dot_f1
value: 75.22376690154258
- type: dot_precision
value: 74.4328031958996
- type: dot_recall
value: 76.03172158915923
- type: euclidean_accuracy
value: 87.48593161796096
- type: euclidean_ap
value: 83.09276683624702
- type: euclidean_f1
value: 75.22376690154258
- type: euclidean_precision
value: 74.4328031958996
- type: euclidean_recall
value: 76.03172158915923
- type: manhattan_accuracy
value: 87.49369348391353
- type: manhattan_ap
value: 82.94869347657408
- type: manhattan_f1
value: 74.95875695376942
- type: manhattan_precision
value: 74.70367821365757
- type: manhattan_recall
value: 75.21558361564522
- type: max_accuracy
value: 87.49369348391353
- type: max_ap
value: 83.09276683624702
- type: max_f1
value: 75.22376690154258
---
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
medspaner/roberta-es-clinical-trials-attributes-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705 | 1,727 | 76 | 1 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: 'Criterios de exclusión: antecedentes de infarto, mujer sin métodos anticonceptivos
adecuados; cirugía programada; padre o madre con cardiopatía.'
model-index:
- name: roberta-es-clinical-trials-attributes-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-es-clinical-trials-attributes-ner
This medical named entity recognition model detects event temporality and experiencer attributes:
- Future: e.g. *cirugía pendiente*, 'pending surgery'.
- History\_of: e.g. *antecedentes de ataque al corazón*, 'history of heart attack'.
- Family\_member: e.g. *hermano*, 'brother'.
- Patient: e.g. *paciente pediátrico*, 'pediatric patient'.
- Other: e.g. *enfermero*, 'nurse'.
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.877 (±0.009)
- Recall: 0.835 (±0.008)
- F1: 0.856 (±0.006)
- Accuracy: 0.989 (±0.001)
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
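A minimal usage sketch with the 🤗 Transformers pipeline API (the aggregation strategy and the example sentence, taken from the widget above, are illustrative choices rather than a prescribed configuration):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-attributes-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into full entity spans
)

text = (
    "Criterios de exclusión: antecedentes de infarto, mujer sin métodos "
    "anticonceptivos adecuados; cirugía programada; padre o madre con cardiopatía."
)
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```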
If you use this model, please, cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average 10.8 epochs (±4.09); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
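As a hedged sketch, these settings map onto the 🤗 `TrainingArguments` and `EarlyStoppingCallback` API roughly as follows; the evaluation/save strategy and the epoch cap are assumptions, since the original training script is not published here:
```python
from transformers import EarlyStoppingCallback, TrainingArguments

args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-attributes-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    lr_scheduler_type="linear",
    num_train_epochs=30,            # assumed upper bound; early stopping ends runs much earlier
    evaluation_strategy="epoch",    # assumed so that early stopping can trigger after each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",
    seed=42,                        # one seed per round; five different seeds were used overall
)

# Passed to a Trainer with the model, the tokenized CT-EBM-ES splits and a metric function;
# the Trainer's default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)
```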
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.877 (±0.003) | 0.835 (±0.008) | 0.856 (±0.006) | 0.989 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:--------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Future | 0.640 (±0.040) | 0.620 (±0.059) | 0.629 (±0.045) | 70 |
| History\_of | 0.742 (±0.021) | 0.667 (±0.016) | 0.703 (±0.010) | 647 |
| Patient | 0.949 (±0.003) | 0.921 (±0.005) | 0.935 (±0.003) | 1462 |
| Family\_member | 0.721 (±0.048) | 0.920 (±0.027) | 0.808 (±0.034) | 20 |
| Other | 0.852 (±0.019) | 0.805 (±0.015) | 0.828 (±0.011) | 120 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] | BioNLP |
RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-awq | RichardErkhov | null | [
"safetensors",
"gpt_neox",
"arxiv:2101.00027",
"arxiv:2201.07311",
"4-bit",
"awq",
"region:us"
] | 1,734 | 1,734 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped-v0 - AWQ
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped-v0/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the weights saved on the `step3000` branch (the third checkpoint).
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# The tokenizer is identical across checkpoints; pinning the revision keeps the cache consistent.
tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
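As a quick consistency check on those figures, the checkpoint spacing multiplied by the number of checkpoints recovers the total token count:
```latex
143 \times 2{,}097{,}152{,}000 \,\text{tokens} = 299{,}892{,}736{,}000 \,\text{tokens}
```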
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed with a
batch size of 4M tokens were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] | Non_BioNLP |
philmas/cese5020-contrastive-model | philmas | sentence-similarity | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:170845",
"loss:BatchHardTripletLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:Alibaba-NLP/gte-base-en-v1.5",
"base_model:finetune:Alibaba-NLP/gte-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,736 | 1,736 | 6 | 0 | ---
base_model: Alibaba-NLP/gte-base-en-v1.5
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:170845
- loss:BatchHardTripletLoss
widget:
- source_sentence: Radionuclide therapy for targeted cancer treatment
sentences:
- Reinforced fish plates with a zinc coating for tropical climates
- Tantalum alloy powders for hardened steel production.
- Kraft pulp used extensively in the production of kraft paper and bags.
- source_sentence: Reflective woven fabric, 100 cm width, 200 meters, designed for
safety gear
sentences:
- Articulated link chains used in conveyor systems, made from high-carbon steel.
- M5 x 45mm flat head wood screws, perfect for flush mounting.
- 15" Mini Dovetail Saw - Small size for fine dovetail work.
- source_sentence: Filament yarn, viscose rayon, not for retail, 64 dtex, twisted
sentences:
- 72.5kV Solid Dielectric Circuit Breaker - Eco-Friendly Solution
- High-quality warp-knit fabric for technical applications, made from natural fibers.
- Animal motifs silk handkerchief, luxurious, 18x18 cm
- source_sentence: Kiln-dried hemlock, available for various craft applications.
sentences:
- Seasoned cockles for appetizers, 150g
- Vitamin B12 + Calcium Tablets, 250 mcg, 100 Count
- Boys' formal suit in burgundy, featuring a synthetic fabric with a slight luster.
- source_sentence: Aluminium alloy wire, 10mm diameter, used in electrical transmission.
sentences:
- Cocoa mass, suitable for coating confectioneries.
- Sunnah Dates - High-quality, nutrient-rich dates ideal for religious celebrations
and dietary needs.
- Zika Virus Antigen Detection Kit - A laboratory reagent for the rapid detection
of Zika virus antigens in human samples.
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
results:
- task:
type: triplet
name: Triplet
dataset:
name: eval
type: eval
metrics:
- type: cosine_accuracy
value: 0.8725334869652888
name: Cosine Accuracy
---
# SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) <!-- at revision a829fd0e060bb84554da0dfd354d0de0f7712b7f -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("philmas/cese5020-contrastive-model")
# Run inference
sentences = [
'Aluminium alloy wire, 10mm diameter, used in electrical transmission.',
'Sunnah Dates - High-quality, nutrient-rich dates ideal for religious celebrations and dietary needs.',
'Cocoa mass, suitable for coating confectioneries.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `eval`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.8725** |
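As a rough illustration of how this metric can be reproduced, the snippet below builds a `TripletEvaluator` on a few made-up triplets; the reported 0.8725 comes from the model's own held-out evaluation split, not from this toy data.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("philmas/cese5020-contrastive-model")

# Toy triplets: each anchor should embed closer to its positive than to its negative.
evaluator = TripletEvaluator(
    anchors=["Aluminium alloy wire, 10mm diameter, used in electrical transmission."],
    positives=["Copper conductor wire for overhead power lines."],
    negatives=["Cocoa mass, suitable for coating confectioneries."],
    name="eval",
)

print(evaluator(model))  # fraction of triplets ranked correctly (cosine accuracy)
```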
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 170,845 training samples
* Columns: <code>text</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text | label |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------|
| type | string | int |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.77 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>hundreds of distinct integer class IDs (ranging from 6 to 5338), each accounting for roughly 0.10–0.30% of the first 1000 samples</li></ul> |
* Samples:
| text | label |
|:--------------------------------------------------------------------------------------|:------------------|
| <code>Continuous Flow Liquid Analyzer</code> | <code>5077</code> |
| <code>Spicy chili powder mix with salt and garlic.</code> | <code>535</code> |
| <code>Brined and dried fillets of fine fish, suitable for a variety of dishes.</code> | <code>246</code> |
* Loss: [<code>BatchHardTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchhardtripletloss)
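For readers unfamiliar with this objective: `BatchHardTripletLoss` does not require pre-mined triplets. Within every batch it treats each example as an anchor, takes the *farthest* example sharing the anchor's label as the positive and the *closest* example with a different label as the negative, and penalizes the anchor whenever the hardest positive is not closer than the hardest negative by at least the margin. A sketch of the per-anchor objective (with \\(f\\) the embedding model, \\(d\\) the distance function and \\(\alpha\\) the margin, both configurable on the loss):

$$
\mathcal{L}(a) = \max\Big(0,\; \max_{p\,:\,y_p = y_a} d\big(f(a), f(p)\big) \;-\; \min_{n\,:\,y_n \neq y_a} d\big(f(a), f(n)\big) \;+\; \alpha\Big)
$$

This is also why the `group_by_label` batch sampler listed under the hyperparameters matters: each label needs at least two samples in a batch for an in-batch positive to exist.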
### Evaluation Dataset
#### Unnamed Dataset
* Size: 10,682 evaluation samples
* Columns: <code>text</code> and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | text | label |
  |:--------|:------------------------------------------------------------------------------------|:------------------|
| type | string | int |
| details | <ul><li>min: 4 tokens</li><li>mean: 15.63 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>4: ~0.10%</li><li>21: ~0.10%</li><li>38: ~0.10%</li><li>41: ~0.10%</li><li>42: ~0.10%</li><li>45: ~0.10%</li><li>64: ~0.10%</li><li>69: ~0.10%</li><li>83: ~0.10%</li><li>87: ~0.10%</li><li>90: ~0.10%</li><li>101: ~0.10%</li><li>105: ~0.10%</li><li>117: ~0.20%</li><li>120: ~0.10%</li><li>121: ~0.10%</li><li>123: ~0.10%</li><li>126: ~0.10%</li><li>127: ~0.10%</li><li>128: ~0.10%</li><li>141: ~0.10%</li><li>147: ~0.10%</li><li>162: ~0.10%</li><li>163: ~0.10%</li><li>171: ~0.10%</li><li>179: ~0.10%</li><li>194: ~0.10%</li><li>203: ~0.10%</li><li>204: ~0.10%</li><li>205: ~0.10%</li><li>207: ~0.10%</li><li>208: ~0.10%</li><li>209: ~0.10%</li><li>215: ~0.10%</li><li>219: ~0.10%</li><li>222: ~0.10%</li><li>226: ~0.10%</li><li>236: ~0.10%</li><li>243: ~0.10%</li><li>256: ~0.10%</li><li>263: ~0.10%</li><li>280: ~0.10%</li><li>281: ~0.10%</li><li>286: ~0.10%</li><li>291: ~0.10%</li><li>293: ~0.10%</li><li>295: ~0.10%</li><li>297: ~0.10%</li><li>302: ~0.10%</li><li>303: ~0.10%</li><li>304: ~0.10%</li><li>308: ~0.10%</li><li>311: ~0.10%</li><li>320: ~0.10%</li><li>322: ~0.10%</li><li>328: ~0.10%</li><li>342: ~0.10%</li><li>343: ~0.10%</li><li>349: ~0.10%</li><li>355: ~0.10%</li><li>361: ~0.20%</li><li>363: ~0.10%</li><li>364: ~0.10%</li><li>365: ~0.10%</li><li>370: ~0.10%</li><li>371: ~0.10%</li><li>383: ~0.10%</li><li>385: ~0.10%</li><li>386: ~0.10%</li><li>398: ~0.10%</li><li>406: ~0.10%</li><li>427: ~0.10%</li><li>430: ~0.10%</li><li>434: ~0.10%</li><li>450: ~0.10%</li><li>451: ~0.10%</li><li>453: ~0.10%</li><li>457: ~0.10%</li><li>462: ~0.10%</li><li>467: ~0.10%</li><li>476: ~0.10%</li><li>479: ~0.10%</li><li>480: ~0.10%</li><li>484: ~0.10%</li><li>490: ~0.10%</li><li>492: ~0.10%</li><li>497: ~0.10%</li><li>510: ~0.10%</li><li>512: ~0.10%</li><li>516: ~0.20%</li><li>518: ~0.10%</li><li>519: ~0.10%</li><li>525: ~0.10%</li><li>528: ~0.10%</li><li>531: ~0.10%</li><li>534: ~0.10%</li><li>552: ~0.10%</li><li>555: ~0.10%</li><li>558: ~0.10%</li><li>563: ~0.10%</li><li>566: ~0.10%</li><li>567: ~0.10%</li><li>574: ~0.10%</li><li>582: ~0.10%</li><li>587: ~0.10%</li><li>598: ~0.10%</li><li>604: ~0.10%</li><li>606: ~0.10%</li><li>614: ~0.10%</li><li>621: ~0.10%</li><li>622: ~0.10%</li><li>630: ~0.10%</li><li>640: ~0.10%</li><li>645: ~0.10%</li><li>668: ~0.10%</li><li>670: ~0.10%</li><li>677: ~0.10%</li><li>682: ~0.10%</li><li>683: ~0.10%</li><li>688: ~0.10%</li><li>720: ~0.10%</li><li>734: ~0.10%</li><li>744: ~0.10%</li><li>753: ~0.10%</li><li>756: ~0.10%</li><li>759: ~0.10%</li><li>763: ~0.10%</li><li>767: ~0.10%</li><li>770: ~0.10%</li><li>773: ~0.20%</li><li>775: ~0.10%</li><li>778: ~0.10%</li><li>779: ~0.10%</li><li>783: ~0.10%</li><li>784: ~0.10%</li><li>785: ~0.10%</li><li>787: ~0.10%</li><li>788: ~0.10%</li><li>789: ~0.10%</li><li>801: ~0.10%</li><li>802: ~0.10%</li><li>811: ~0.10%</li><li>812: ~0.10%</li><li>827: ~0.10%</li><li>828: ~0.10%</li><li>836: ~0.10%</li><li>843: ~0.10%</li><li>844: ~0.10%</li><li>846: ~0.10%</li><li>848: ~0.10%</li><li>851: ~0.10%</li><li>872: ~0.20%</li><li>875: ~0.10%</li><li>878: ~0.10%</li><li>879: ~0.10%</li><li>881: ~0.10%</li><li>890: ~0.10%</li><li>894: ~0.10%</li><li>895: ~0.10%</li><li>897: ~0.10%</li><li>899: ~0.10%</li><li>900: ~0.10%</li><li>903: ~0.10%</li><li>927: ~0.10%</li><li>929: ~0.10%</li><li>940: ~0.10%</li><li>945: ~0.10%</li><li>953: ~0.10%</li><li>957: ~0.10%</li><li>963: ~0.10%</li><li>968: ~0.10%</li><li>969: ~0.10%</li><li>975: 
~0.10%</li><li>976: ~0.10%</li><li>983: ~0.10%</li><li>991: ~0.10%</li><li>993: ~0.20%</li><li>994: ~0.10%</li><li>997: ~0.10%</li><li>1001: ~0.10%</li><li>1036: ~0.10%</li><li>1042: ~0.10%</li><li>1074: ~0.10%</li><li>1078: ~0.10%</li><li>1080: ~0.10%</li><li>1085: ~0.10%</li><li>1092: ~0.10%</li><li>1097: ~0.10%</li><li>1104: ~0.10%</li><li>1107: ~0.10%</li><li>1119: ~0.10%</li><li>1124: ~0.10%</li><li>1139: ~0.10%</li><li>1142: ~0.10%</li><li>1181: ~0.10%</li><li>1186: ~0.10%</li><li>1201: ~0.10%</li><li>1206: ~0.10%</li><li>1211: ~0.10%</li><li>1213: ~0.10%</li><li>1225: ~0.10%</li><li>1232: ~0.10%</li><li>1236: ~0.10%</li><li>1237: ~0.10%</li><li>1238: ~0.10%</li><li>1240: ~0.10%</li><li>1256: ~0.10%</li><li>1259: ~0.10%</li><li>1266: ~0.10%</li><li>1273: ~0.10%</li><li>1277: ~0.10%</li><li>1279: ~0.10%</li><li>1280: ~0.10%</li><li>1281: ~0.10%</li><li>1284: ~0.10%</li><li>1286: ~0.10%</li><li>1292: ~0.10%</li><li>1308: ~0.10%</li><li>1321: ~0.10%</li><li>1340: ~0.10%</li><li>1346: ~0.10%</li><li>1348: ~0.10%</li><li>1355: ~0.10%</li><li>1369: ~0.20%</li><li>1370: ~0.10%</li><li>1371: ~0.20%</li><li>1390: ~0.10%</li><li>1391: ~0.10%</li><li>1394: ~0.10%</li><li>1396: ~0.10%</li><li>1403: ~0.10%</li><li>1409: ~0.10%</li><li>1410: ~0.10%</li><li>1412: ~0.10%</li><li>1414: ~0.20%</li><li>1416: ~0.10%</li><li>1420: ~0.10%</li><li>1423: ~0.10%</li><li>1427: ~0.10%</li><li>1430: ~0.10%</li><li>1431: ~0.10%</li><li>1432: ~0.10%</li><li>1437: ~0.10%</li><li>1462: ~0.10%</li><li>1467: ~0.10%</li><li>1472: ~0.10%</li><li>1474: ~0.10%</li><li>1475: ~0.10%</li><li>1479: ~0.10%</li><li>1494: ~0.10%</li><li>1497: ~0.10%</li><li>1503: ~0.10%</li><li>1505: ~0.10%</li><li>1506: ~0.10%</li><li>1507: ~0.10%</li><li>1508: ~0.10%</li><li>1518: ~0.10%</li><li>1523: ~0.10%</li><li>1529: ~0.10%</li><li>1533: ~0.10%</li><li>1543: ~0.10%</li><li>1552: ~0.10%</li><li>1558: ~0.10%</li><li>1569: ~0.10%</li><li>1576: ~0.10%</li><li>1577: ~0.10%</li><li>1578: ~0.10%</li><li>1583: ~0.10%</li><li>1587: ~0.10%</li><li>1589: ~0.10%</li><li>1590: ~0.10%</li><li>1598: ~0.10%</li><li>1611: ~0.10%</li><li>1613: ~0.10%</li><li>1614: ~0.10%</li><li>1621: ~0.10%</li><li>1629: ~0.10%</li><li>1630: ~0.10%</li><li>1631: ~0.10%</li><li>1633: ~0.10%</li><li>1634: ~0.10%</li><li>1646: ~0.10%</li><li>1647: ~0.10%</li><li>1653: ~0.10%</li><li>1670: ~0.10%</li><li>1675: ~0.10%</li><li>1685: ~0.10%</li><li>1697: ~0.10%</li><li>1708: ~0.10%</li><li>1731: ~0.10%</li><li>1740: ~0.10%</li><li>1743: ~0.10%</li><li>1750: ~0.10%</li><li>1752: ~0.10%</li><li>1755: ~0.10%</li><li>1762: ~0.10%</li><li>1768: ~0.10%</li><li>1773: ~0.10%</li><li>1792: ~0.10%</li><li>1795: ~0.10%</li><li>1798: ~0.10%</li><li>1801: ~0.10%</li><li>1807: ~0.10%</li><li>1817: ~0.10%</li><li>1824: ~0.10%</li><li>1825: ~0.20%</li><li>1828: ~0.10%</li><li>1833: ~0.10%</li><li>1852: ~0.10%</li><li>1854: ~0.10%</li><li>1855: ~0.10%</li><li>1856: ~0.10%</li><li>1859: ~0.10%</li><li>1867: ~0.10%</li><li>1869: ~0.10%</li><li>1874: ~0.10%</li><li>1886: ~0.10%</li><li>1889: ~0.10%</li><li>1892: ~0.10%</li><li>1893: ~0.10%</li><li>1894: ~0.10%</li><li>1901: ~0.10%</li><li>1910: ~0.10%</li><li>1911: ~0.10%</li><li>1921: ~0.10%</li><li>1922: ~0.10%</li><li>1925: ~0.10%</li><li>1933: ~0.20%</li><li>1934: ~0.10%</li><li>1937: ~0.10%</li><li>1940: ~0.20%</li><li>1941: ~0.10%</li><li>1954: ~0.10%</li><li>1969: ~0.10%</li><li>1974: ~0.20%</li><li>1986: ~0.10%</li><li>1988: ~0.10%</li><li>1989: ~0.10%</li><li>1993: ~0.10%</li><li>1994: ~0.10%</li><li>1996: ~0.10%</li><li>2013: 
~0.10%</li><li>2043: ~0.10%</li><li>2045: ~0.10%</li><li>2054: ~0.10%</li><li>2070: ~0.10%</li><li>2074: ~0.10%</li><li>2076: ~0.10%</li><li>2080: ~0.10%</li><li>2083: ~0.10%</li><li>2084: ~0.10%</li><li>2087: ~0.10%</li><li>2095: ~0.10%</li><li>2116: ~0.10%</li><li>2124: ~0.10%</li><li>2125: ~0.10%</li><li>2126: ~0.10%</li><li>2129: ~0.10%</li><li>2131: ~0.10%</li><li>2133: ~0.10%</li><li>2137: ~0.10%</li><li>2154: ~0.20%</li><li>2157: ~0.10%</li><li>2160: ~0.10%</li><li>2162: ~0.10%</li><li>2168: ~0.10%</li><li>2177: ~0.10%</li><li>2181: ~0.10%</li><li>2184: ~0.10%</li><li>2193: ~0.10%</li><li>2194: ~0.10%</li><li>2195: ~0.10%</li><li>2196: ~0.10%</li><li>2200: ~0.10%</li><li>2201: ~0.10%</li><li>2202: ~0.10%</li><li>2203: ~0.10%</li><li>2215: ~0.10%</li><li>2217: ~0.10%</li><li>2226: ~0.10%</li><li>2228: ~0.10%</li><li>2231: ~0.10%</li><li>2232: ~0.10%</li><li>2235: ~0.10%</li><li>2236: ~0.10%</li><li>2238: ~0.10%</li><li>2242: ~0.10%</li><li>2243: ~0.10%</li><li>2244: ~0.10%</li><li>2246: ~0.10%</li><li>2247: ~0.10%</li><li>2250: ~0.10%</li><li>2253: ~0.10%</li><li>2261: ~0.10%</li><li>2263: ~0.10%</li><li>2272: ~0.10%</li><li>2275: ~0.10%</li><li>2284: ~0.10%</li><li>2292: ~0.10%</li><li>2297: ~0.10%</li><li>2298: ~0.10%</li><li>2299: ~0.10%</li><li>2300: ~0.10%</li><li>2315: ~0.10%</li><li>2316: ~0.10%</li><li>2326: ~0.10%</li><li>2337: ~0.10%</li><li>2341: ~0.10%</li><li>2346: ~0.10%</li><li>2350: ~0.10%</li><li>2353: ~0.10%</li><li>2354: ~0.10%</li><li>2356: ~0.20%</li><li>2357: ~0.10%</li><li>2359: ~0.10%</li><li>2368: ~0.10%</li><li>2377: ~0.10%</li><li>2384: ~0.10%</li><li>2389: ~0.10%</li><li>2390: ~0.10%</li><li>2391: ~0.10%</li><li>2400: ~0.10%</li><li>2408: ~0.10%</li><li>2409: ~0.10%</li><li>2412: ~0.10%</li><li>2420: ~0.10%</li><li>2423: ~0.10%</li><li>2439: ~0.10%</li><li>2442: ~0.10%</li><li>2444: ~0.10%</li><li>2453: ~0.10%</li><li>2454: ~0.10%</li><li>2458: ~0.10%</li><li>2464: ~0.10%</li><li>2475: ~0.10%</li><li>2477: ~0.10%</li><li>2481: ~0.10%</li><li>2482: ~0.10%</li><li>2491: ~0.10%</li><li>2498: ~0.10%</li><li>2500: ~0.20%</li><li>2508: ~0.10%</li><li>2526: ~0.20%</li><li>2529: ~0.10%</li><li>2538: ~0.10%</li><li>2540: ~0.10%</li><li>2544: ~0.20%</li><li>2546: ~0.10%</li><li>2554: ~0.10%</li><li>2555: ~0.10%</li><li>2557: ~0.10%</li><li>2561: ~0.10%</li><li>2562: ~0.10%</li><li>2565: ~0.10%</li><li>2568: ~0.10%</li><li>2572: ~0.10%</li><li>2582: ~0.10%</li><li>2583: ~0.10%</li><li>2586: ~0.10%</li><li>2587: ~0.10%</li><li>2588: ~0.10%</li><li>2597: ~0.10%</li><li>2598: ~0.10%</li><li>2605: ~0.10%</li><li>2616: ~0.10%</li><li>2618: ~0.10%</li><li>2622: ~0.10%</li><li>2637: ~0.10%</li><li>2639: ~0.10%</li><li>2646: ~0.10%</li><li>2647: ~0.10%</li><li>2648: ~0.10%</li><li>2649: ~0.10%</li><li>2654: ~0.20%</li><li>2664: ~0.10%</li><li>2667: ~0.10%</li><li>2668: ~0.10%</li><li>2676: ~0.10%</li><li>2678: ~0.10%</li><li>2679: ~0.10%</li><li>2681: ~0.10%</li><li>2682: ~0.10%</li><li>2688: ~0.10%</li><li>2695: ~0.10%</li><li>2698: ~0.10%</li><li>2708: ~0.10%</li><li>2713: ~0.10%</li><li>2715: ~0.10%</li><li>2718: ~0.10%</li><li>2721: ~0.20%</li><li>2722: ~0.10%</li><li>2730: ~0.10%</li><li>2734: ~0.10%</li><li>2736: ~0.10%</li><li>2737: ~0.10%</li><li>2745: ~0.10%</li><li>2766: ~0.20%</li><li>2788: ~0.10%</li><li>2793: ~0.10%</li><li>2794: ~0.10%</li><li>2796: ~0.10%</li><li>2799: ~0.10%</li><li>2801: ~0.10%</li><li>2813: ~0.10%</li><li>2819: ~0.10%</li><li>2823: ~0.10%</li><li>2827: ~0.10%</li><li>2831: ~0.10%</li><li>2838: ~0.10%</li><li>2846: ~0.10%</li><li>2848: 
~0.10%</li><li>2850: ~0.10%</li><li>2852: ~0.10%</li><li>2853: ~0.10%</li><li>2854: ~0.10%</li><li>2861: ~0.10%</li><li>2868: ~0.10%</li><li>2869: ~0.10%</li><li>2872: ~0.10%</li><li>2875: ~0.10%</li><li>2878: ~0.10%</li><li>2880: ~0.10%</li><li>2889: ~0.10%</li><li>2902: ~0.10%</li><li>2904: ~0.10%</li><li>2906: ~0.10%</li><li>2910: ~0.10%</li><li>2914: ~0.10%</li><li>2917: ~0.10%</li><li>2923: ~0.10%</li><li>2930: ~0.10%</li><li>2945: ~0.10%</li><li>2950: ~0.10%</li><li>2951: ~0.10%</li><li>2954: ~0.10%</li><li>2963: ~0.10%</li><li>2968: ~0.10%</li><li>2971: ~0.10%</li><li>2973: ~0.10%</li><li>2974: ~0.10%</li><li>3002: ~0.10%</li><li>3013: ~0.10%</li><li>3014: ~0.10%</li><li>3018: ~0.10%</li><li>3021: ~0.10%</li><li>3027: ~0.10%</li><li>3037: ~0.10%</li><li>3042: ~0.10%</li><li>3048: ~0.10%</li><li>3051: ~0.10%</li><li>3053: ~0.10%</li><li>3060: ~0.10%</li><li>3061: ~0.10%</li><li>3075: ~0.20%</li><li>3080: ~0.10%</li><li>3085: ~0.10%</li><li>3090: ~0.10%</li><li>3092: ~0.10%</li><li>3095: ~0.10%</li><li>3098: ~0.10%</li><li>3106: ~0.10%</li><li>3120: ~0.10%</li><li>3121: ~0.10%</li><li>3123: ~0.10%</li><li>3133: ~0.10%</li><li>3142: ~0.10%</li><li>3144: ~0.20%</li><li>3154: ~0.10%</li><li>3162: ~0.10%</li><li>3163: ~0.10%</li><li>3171: ~0.10%</li><li>3175: ~0.10%</li><li>3178: ~0.10%</li><li>3189: ~0.10%</li><li>3191: ~0.10%</li><li>3196: ~0.10%</li><li>3219: ~0.10%</li><li>3221: ~0.10%</li><li>3228: ~0.10%</li><li>3229: ~0.10%</li><li>3230: ~0.10%</li><li>3233: ~0.10%</li><li>3246: ~0.10%</li><li>3255: ~0.10%</li><li>3257: ~0.10%</li><li>3258: ~0.10%</li><li>3260: ~0.10%</li><li>3261: ~0.10%</li><li>3263: ~0.10%</li><li>3264: ~0.10%</li><li>3267: ~0.10%</li><li>3271: ~0.10%</li><li>3272: ~0.10%</li><li>3285: ~0.10%</li><li>3289: ~0.10%</li><li>3292: ~0.10%</li><li>3306: ~0.10%</li><li>3308: ~0.10%</li><li>3311: ~0.10%</li><li>3343: ~0.10%</li><li>3358: ~0.10%</li><li>3359: ~0.10%</li><li>3376: ~0.10%</li><li>3378: ~0.10%</li><li>3380: ~0.10%</li><li>3381: ~0.10%</li><li>3395: ~0.10%</li><li>3396: ~0.10%</li><li>3397: ~0.10%</li><li>3398: ~0.10%</li><li>3402: ~0.10%</li><li>3406: ~0.10%</li><li>3407: ~0.10%</li><li>3410: ~0.10%</li><li>3424: ~0.10%</li><li>3425: ~0.10%</li><li>3426: ~0.10%</li><li>3427: ~0.10%</li><li>3434: ~0.10%</li><li>3437: ~0.10%</li><li>3442: ~0.10%</li><li>3444: ~0.10%</li><li>3450: ~0.20%</li><li>3458: ~0.10%</li><li>3460: ~0.10%</li><li>3471: ~0.10%</li><li>3480: ~0.10%</li><li>3486: ~0.10%</li><li>3495: ~0.10%</li><li>3505: ~0.10%</li><li>3519: ~0.10%</li><li>3522: ~0.10%</li><li>3523: ~0.10%</li><li>3530: ~0.10%</li><li>3547: ~0.10%</li><li>3548: ~0.10%</li><li>3549: ~0.10%</li><li>3551: ~0.10%</li><li>3560: ~0.10%</li><li>3566: ~0.10%</li><li>3567: ~0.10%</li><li>3571: ~0.10%</li><li>3573: ~0.10%</li><li>3578: ~0.10%</li><li>3585: ~0.10%</li><li>3589: ~0.10%</li><li>3590: ~0.10%</li><li>3594: ~0.10%</li><li>3596: ~0.10%</li><li>3598: ~0.10%</li><li>3605: ~0.10%</li><li>3607: ~0.10%</li><li>3608: ~0.10%</li><li>3619: ~0.10%</li><li>3624: ~0.10%</li><li>3630: ~0.10%</li><li>3632: ~0.20%</li><li>3633: ~0.10%</li><li>3634: ~0.10%</li><li>3637: ~0.10%</li><li>3653: ~0.10%</li><li>3654: ~0.10%</li><li>3656: ~0.10%</li><li>3657: ~0.10%</li><li>3661: ~0.10%</li><li>3668: ~0.10%</li><li>3679: ~0.10%</li><li>3681: ~0.10%</li><li>3687: ~0.10%</li><li>3690: ~0.10%</li><li>3693: ~0.10%</li><li>3695: ~0.10%</li><li>3717: ~0.10%</li><li>3719: ~0.10%</li><li>3725: ~0.10%</li><li>3727: ~0.20%</li><li>3729: ~0.10%</li><li>3735: ~0.10%</li><li>3737: ~0.10%</li><li>3744: 
~0.10%</li><li>3745: ~0.10%</li><li>3747: ~0.10%</li><li>3750: ~0.10%</li><li>3756: ~0.10%</li><li>3758: ~0.10%</li><li>3768: ~0.10%</li><li>3773: ~0.10%</li><li>3774: ~0.10%</li><li>3786: ~0.10%</li><li>3793: ~0.20%</li><li>3794: ~0.10%</li><li>3799: ~0.10%</li><li>3800: ~0.10%</li><li>3806: ~0.10%</li><li>3809: ~0.10%</li><li>3814: ~0.20%</li><li>3817: ~0.20%</li><li>3822: ~0.10%</li><li>3826: ~0.10%</li><li>3827: ~0.10%</li><li>3830: ~0.10%</li><li>3835: ~0.10%</li><li>3837: ~0.10%</li><li>3838: ~0.10%</li><li>3839: ~0.10%</li><li>3848: ~0.10%</li><li>3852: ~0.10%</li><li>3853: ~0.10%</li><li>3854: ~0.10%</li><li>3855: ~0.10%</li><li>3860: ~0.10%</li><li>3868: ~0.10%</li><li>3886: ~0.10%</li><li>3887: ~0.10%</li><li>3888: ~0.10%</li><li>3901: ~0.10%</li><li>3904: ~0.10%</li><li>3905: ~0.10%</li><li>3907: ~0.10%</li><li>3916: ~0.10%</li><li>3918: ~0.10%</li><li>3923: ~0.10%</li><li>3928: ~0.10%</li><li>3930: ~0.10%</li><li>3932: ~0.10%</li><li>3934: ~0.10%</li><li>3938: ~0.10%</li><li>3939: ~0.10%</li><li>3943: ~0.10%</li><li>3952: ~0.10%</li><li>3956: ~0.10%</li><li>3961: ~0.10%</li><li>3969: ~0.10%</li><li>3972: ~0.20%</li><li>3977: ~0.10%</li><li>3981: ~0.10%</li><li>3986: ~0.10%</li><li>3998: ~0.10%</li><li>4006: ~0.10%</li><li>4011: ~0.10%</li><li>4024: ~0.10%</li><li>4030: ~0.10%</li><li>4031: ~0.10%</li><li>4032: ~0.10%</li><li>4042: ~0.10%</li><li>4046: ~0.10%</li><li>4062: ~0.20%</li><li>4070: ~0.10%</li><li>4071: ~0.10%</li><li>4074: ~0.10%</li><li>4077: ~0.10%</li><li>4079: ~0.10%</li><li>4080: ~0.10%</li><li>4085: ~0.10%</li><li>4098: ~0.10%</li><li>4109: ~0.10%</li><li>4119: ~0.10%</li><li>4120: ~0.10%</li><li>4124: ~0.10%</li><li>4130: ~0.10%</li><li>4135: ~0.10%</li><li>4139: ~0.10%</li><li>4140: ~0.10%</li><li>4150: ~0.10%</li><li>4158: ~0.10%</li><li>4164: ~0.10%</li><li>4167: ~0.10%</li><li>4169: ~0.10%</li><li>4174: ~0.10%</li><li>4183: ~0.10%</li><li>4184: ~0.10%</li><li>4185: ~0.10%</li><li>4197: ~0.10%</li><li>4202: ~0.10%</li><li>4204: ~0.10%</li><li>4212: ~0.10%</li><li>4220: ~0.10%</li><li>4228: ~0.10%</li><li>4230: ~0.10%</li><li>4234: ~0.10%</li><li>4235: ~0.10%</li><li>4236: ~0.10%</li><li>4238: ~0.10%</li><li>4243: ~0.10%</li><li>4256: ~0.10%</li><li>4257: ~0.10%</li><li>4261: ~0.10%</li><li>4278: ~0.10%</li><li>4282: ~0.10%</li><li>4283: ~0.10%</li><li>4284: ~0.10%</li><li>4290: ~0.10%</li><li>4295: ~0.10%</li><li>4318: ~0.10%</li><li>4323: ~0.10%</li><li>4326: ~0.10%</li><li>4328: ~0.10%</li><li>4341: ~0.10%</li><li>4353: ~0.10%</li><li>4357: ~0.20%</li><li>4371: ~0.20%</li><li>4385: ~0.10%</li><li>4390: ~0.10%</li><li>4395: ~0.10%</li><li>4400: ~0.10%</li><li>4403: ~0.10%</li><li>4409: ~0.10%</li><li>4411: ~0.10%</li><li>4413: ~0.10%</li><li>4423: ~0.10%</li><li>4425: ~0.10%</li><li>4426: ~0.10%</li><li>4431: ~0.10%</li><li>4432: ~0.10%</li><li>4439: ~0.10%</li><li>4444: ~0.10%</li><li>4455: ~0.10%</li><li>4461: ~0.10%</li><li>4468: ~0.10%</li><li>4471: ~0.10%</li><li>4475: ~0.10%</li><li>4493: ~0.10%</li><li>4504: ~0.10%</li><li>4505: ~0.20%</li><li>4513: ~0.10%</li><li>4515: ~0.10%</li><li>4532: ~0.10%</li><li>4552: ~0.10%</li><li>4553: ~0.10%</li><li>4555: ~0.10%</li><li>4564: ~0.10%</li><li>4567: ~0.10%</li><li>4578: ~0.10%</li><li>4580: ~0.10%</li><li>4600: ~0.10%</li><li>4604: ~0.10%</li><li>4616: ~0.10%</li><li>4619: ~0.10%</li><li>4637: ~0.10%</li><li>4638: ~0.10%</li><li>4648: ~0.10%</li><li>4653: ~0.10%</li><li>4665: ~0.10%</li><li>4670: ~0.10%</li><li>4681: ~0.10%</li><li>4685: ~0.20%</li><li>4690: ~0.10%</li><li>4695: ~0.10%</li><li>4696: 
~0.10%</li><li>4697: ~0.10%</li><li>4698: ~0.20%</li><li>4699: ~0.10%</li><li>4706: ~0.10%</li><li>4712: ~0.10%</li><li>4720: ~0.10%</li><li>4724: ~0.10%</li><li>4731: ~0.10%</li><li>4733: ~0.10%</li><li>4739: ~0.10%</li><li>4740: ~0.10%</li><li>4747: ~0.10%</li><li>4748: ~0.10%</li><li>4755: ~0.10%</li><li>4766: ~0.10%</li><li>4780: ~0.10%</li><li>4789: ~0.10%</li><li>4792: ~0.10%</li><li>4808: ~0.10%</li><li>4815: ~0.10%</li><li>4817: ~0.10%</li><li>4822: ~0.10%</li><li>4825: ~0.10%</li><li>4838: ~0.10%</li><li>4839: ~0.10%</li><li>4843: ~0.10%</li><li>4858: ~0.10%</li><li>4862: ~0.10%</li><li>4867: ~0.10%</li><li>4869: ~0.10%</li><li>4871: ~0.10%</li><li>4872: ~0.10%</li><li>4877: ~0.10%</li><li>4886: ~0.10%</li><li>4894: ~0.10%</li><li>4898: ~0.10%</li><li>4900: ~0.10%</li><li>4903: ~0.10%</li><li>4904: ~0.10%</li><li>4907: ~0.10%</li><li>4923: ~0.10%</li><li>4930: ~0.10%</li><li>4931: ~0.10%</li><li>4933: ~0.10%</li><li>4940: ~0.10%</li><li>4942: ~0.10%</li><li>4944: ~0.10%</li><li>4946: ~0.10%</li><li>4952: ~0.10%</li><li>4954: ~0.10%</li><li>4957: ~0.10%</li><li>4969: ~0.10%</li><li>4979: ~0.10%</li><li>4982: ~0.10%</li><li>4986: ~0.10%</li><li>5002: ~0.10%</li><li>5005: ~0.10%</li><li>5014: ~0.10%</li><li>5018: ~0.10%</li><li>5020: ~0.10%</li><li>5034: ~0.10%</li><li>5040: ~0.20%</li><li>5043: ~0.10%</li><li>5046: ~0.20%</li><li>5048: ~0.10%</li><li>5051: ~0.10%</li><li>5058: ~0.10%</li><li>5064: ~0.10%</li><li>5066: ~0.10%</li><li>5072: ~0.10%</li><li>5075: ~0.10%</li><li>5076: ~0.10%</li><li>5082: ~0.10%</li><li>5087: ~0.10%</li><li>5101: ~0.10%</li><li>5102: ~0.10%</li><li>5106: ~0.10%</li><li>5110: ~0.10%</li><li>5124: ~0.10%</li><li>5133: ~0.10%</li><li>5135: ~0.20%</li><li>5139: ~0.10%</li><li>5140: ~0.10%</li><li>5170: ~0.20%</li><li>5175: ~0.10%</li><li>5196: ~0.10%</li><li>5204: ~0.10%</li><li>5214: ~0.10%</li><li>5222: ~0.10%</li><li>5229: ~0.10%</li><li>5243: ~0.10%</li><li>5261: ~0.10%</li><li>5265: ~0.10%</li><li>5273: ~0.10%</li><li>5279: ~0.10%</li><li>5280: ~0.10%</li><li>5283: ~0.10%</li><li>5285: ~0.10%</li><li>5286: ~0.10%</li><li>5287: ~0.10%</li><li>5288: ~0.10%</li><li>5311: ~0.10%</li><li>5314: ~0.10%</li><li>5315: ~0.10%</li><li>5318: ~0.10%</li><li>5322: ~0.10%</li><li>5323: ~0.10%</li><li>5329: ~0.10%</li><li>5331: ~0.10%</li><li>5333: ~0.10%</li><li>5334: ~0.10%</li></ul> |
* Samples:
| text | label |
|:----------------------------------------------------------------------------------------------------------|:------------------|
| <code>Integrated radio receiver with sound recording capabilities, mobile battery-operated system.</code> | <code>4681</code> |
| <code>Premium frozen turkey, 7-9 kg, antibiotic-free, processed within 24 hours.</code> | <code>69</code> |
| <code>Glittering colored granules, under 7mm, used in cosmetics and decorative arts.</code> | <code>3263</code> |
* Loss: [<code>BatchHardTripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#batchhardtripletloss)
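As a concrete illustration of how the two `(text, label)` datasets above pair with this loss, here is a minimal, hypothetical setup sketch using the sentence-transformers training API. The literal rows are copied from the sample tables above, and the base checkpoint is a placeholder; the actual training data and base model for this run are not reproduced here.

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import BatchHardTripletLoss

# Toy rows mirroring the (text, label) schema shown in the sample tables above;
# the real datasets are far larger.
train_dataset = Dataset.from_dict({
    "text": [
        "Continuous Flow Liquid Analyzer",
        "Spicy chili powder mix with salt and garlic.",
        "Brined and dried fillets of fine fish, suitable for a variety of dishes.",
    ],
    "label": [5077, 535, 246],
})
eval_dataset = Dataset.from_dict({
    "text": [
        "Integrated radio receiver with sound recording capabilities, mobile battery-operated system.",
        "Premium frozen turkey, 7-9 kg, antibiotic-free, processed within 24 hours.",
        "Glittering colored granules, under 7mm, used in cosmetics and decorative arts.",
    ],
    "label": [4681, 69, 3263],
})

# Placeholder base checkpoint; the model actually fine-tuned for this card may differ.
model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)

# Mines the hardest in-batch positive and negative for every anchor.
loss = BatchHardTripletLoss(model)
```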
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `bf16`: True
- `optim`: adamw_torch_fused
- `hub_private_repo`: True
- `batch_sampler`: group_by_label
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: True
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: group_by_label
- `multi_dataset_batch_sampler`: proportional
</details>
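The non-default hyperparameters listed above map onto the sentence-transformers training arguments roughly as in the following sketch. It reuses the `model`, datasets and `loss` objects from the sketch in the Evaluation Dataset section, and `output_dir` is a made-up placeholder rather than a value taken from this card.

```python
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/batch-hard-triplet",  # hypothetical path, not from this card
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=1e-5,
    warmup_ratio=0.1,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    hub_private_repo=True,
    # group_by_label keeps multiple samples of the same label in each batch,
    # which BatchHardTripletLoss needs in order to form in-batch positives.
    batch_sampler=BatchSamplers.GROUP_BY_LABEL,
)

trainer = SentenceTransformerTrainer(
    model=model,                  # objects from the earlier sketch
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```

The `group_by_label` batch sampler is the key non-default choice for this loss: with a plain random sampler, a 64-sample batch drawn from a label space this large would often contain no second example of an anchor's label, leaving nothing to mine as a positive. The `eval_cosine_accuracy` column in the logs below presumably comes from an evaluator passed to the trainer separately; it is not shown in this sketch.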
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | eval_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------:|
| 0 | 0 | - | - | 0.8447 |
| 0.0037 | 10 | 4.0341 | - | - |
| 0.0075 | 20 | 3.8755 | - | - |
| 0.0112 | 30 | 3.9669 | - | - |
| 0.0150 | 40 | 4.6424 | - | - |
| 0.0187 | 50 | 3.4661 | - | - |
| 0.0225 | 60 | 3.7177 | - | - |
| 0.0262 | 70 | 4.5446 | - | - |
| 0.0300 | 80 | 3.2709 | - | - |
| 0.0337 | 90 | 4.648 | - | - |
| 0.0375 | 100 | 2.4594 | - | - |
| 0.0412 | 110 | 2.4992 | - | - |
| 0.0449 | 120 | 2.381 | - | - |
| 0.0487 | 130 | 2.9117 | - | - |
| 0.0524 | 140 | 2.0562 | - | - |
| 0.0562 | 150 | 2.9831 | - | - |
| 0.0599 | 160 | 3.3428 | - | - |
| 0.0637 | 170 | 3.3217 | - | - |
| 0.0674 | 180 | 3.5566 | - | - |
| 0.0712 | 190 | 3.0018 | - | - |
| 0.0749 | 200 | 2.4643 | - | - |
| 0.0787 | 210 | 2.1375 | - | - |
| 0.0824 | 220 | 2.7643 | - | - |
| 0.0861 | 230 | 2.3066 | - | - |
| 0.0899 | 240 | 2.0659 | - | - |
| 0.0936 | 250 | 1.6675 | - | - |
| 0.0974 | 260 | 2.516 | - | - |
| 0.1011 | 270 | 1.4495 | - | - |
| 0.1049 | 280 | 3.1037 | - | - |
| 0.1086 | 290 | 2.9175 | - | - |
| 0.1124 | 300 | 2.8179 | - | - |
| 0.1161 | 310 | 1.8993 | - | - |
| 0.1199 | 320 | 2.7167 | - | - |
| 0.1236 | 330 | 2.0482 | - | - |
| 0.1273 | 340 | 3.0799 | - | - |
| 0.1311 | 350 | 3.0152 | - | - |
| 0.1348 | 360 | 2.4402 | - | - |
| 0.1386 | 370 | 1.7145 | - | - |
| 0.1423 | 380 | 1.5029 | - | - |
| 0.1461 | 390 | 2.3034 | - | - |
| 0.1498 | 400 | 2.0296 | - | - |
| 0.1536 | 410 | 2.3206 | - | - |
| 0.1573 | 420 | 2.3162 | - | - |
| 0.1610 | 430 | 1.3744 | - | - |
| 0.1648 | 440 | 2.9439 | - | - |
| 0.1685 | 450 | 2.5834 | - | - |
| 0.1723 | 460 | 2.2302 | - | - |
| 0.1760 | 470 | 1.5165 | - | - |
| 0.1798 | 480 | 2.4553 | - | - |
| 0.1835 | 490 | 1.6331 | - | - |
| 0.1873 | 500 | 1.7502 | - | - |
| 0.1910 | 510 | 2.3556 | - | - |
| 0.1948 | 520 | 2.6268 | - | - |
| 0.1985 | 530 | 2.3735 | - | - |
| 0.2022 | 540 | 2.9494 | - | - |
| 0.2060 | 550 | 1.7133 | - | - |
| 0.2097 | 560 | 1.5455 | - | - |
| 0.2135 | 570 | 1.8857 | - | - |
| 0.2172 | 580 | 1.7242 | - | - |
| 0.2210 | 590 | 0.9303 | - | - |
| 0.2247 | 600 | 1.2073 | - | - |
| 0.2285 | 610 | 1.9799 | - | - |
| 0.2322 | 620 | 0.5134 | - | - |
| 0.2360 | 630 | 1.7473 | - | - |
| 0.2397 | 640 | 2.5535 | - | - |
| 0.2434 | 650 | 2.2415 | - | - |
| 0.2472 | 660 | 2.3361 | - | - |
| 0.2509 | 670 | 2.1372 | - | - |
| 0.2547 | 680 | 1.8236 | - | - |
| 0.2584 | 690 | 1.7999 | - | - |
| 0.2622 | 700 | 1.041 | - | - |
| 0.2659 | 710 | 1.5633 | - | - |
| 0.2697 | 720 | 1.475 | - | - |
| 0.2734 | 730 | 2.6768 | - | - |
| 0.2772 | 740 | 2.0162 | - | - |
| 0.2809 | 750 | 2.8179 | - | - |
| 0.2846 | 760 | 2.2107 | - | - |
| 0.2884 | 770 | 1.4401 | - | - |
| 0.2921 | 780 | 1.3463 | - | - |
| 0.2959 | 790 | 1.4704 | - | - |
| 0.2996 | 800 | 2.1911 | - | - |
| 0.3034 | 810 | 1.4399 | - | - |
| 0.3071 | 820 | 1.6818 | - | - |
| 0.3109 | 830 | 1.3086 | - | - |
| 0.3146 | 840 | 3.0084 | - | - |
| 0.3184 | 850 | 1.5507 | - | - |
| 0.3221 | 860 | 1.2379 | - | - |
| 0.3258 | 870 | 1.6205 | - | - |
| 0.3296 | 880 | 1.7312 | - | - |
| 0.3333 | 890 | 1.2205 | - | - |
| 0.3371 | 900 | 2.0977 | - | - |
| 0.3408 | 910 | 2.1105 | - | - |
| 0.3446 | 920 | 1.6375 | - | - |
| 0.3483 | 930 | 1.7065 | - | - |
| 0.3521 | 940 | 1.6578 | - | - |
| 0.3558 | 950 | 1.9871 | - | - |
| 0.3596 | 960 | 2.5589 | - | - |
| 0.3633 | 970 | 1.5536 | - | - |
| 0.3670 | 980 | 1.5662 | - | - |
| 0.3708 | 990 | 2.3202 | - | - |
| 0.3745 | 1000 | 1.6294 | - | - |
| 0.3783 | 1010 | 2.0777 | - | - |
| 0.3820 | 1020 | 1.5202 | - | - |
| 0.3858 | 1030 | 1.8365 | - | - |
| 0.3895 | 1040 | 0.9917 | - | - |
| 0.3933 | 1050 | 1.3668 | - | - |
| 0.3970 | 1060 | 1.0952 | - | - |
| 0.4007 | 1070 | 0.6225 | - | - |
| 0.4045 | 1080 | 0.9056 | - | - |
| 0.4082 | 1090 | 2.5108 | - | - |
| 0.4120 | 1100 | 0.8275 | - | - |
| 0.4157 | 1110 | 0.8328 | - | - |
| 0.4195 | 1120 | 1.6204 | - | - |
| 0.4232 | 1130 | 1.4578 | - | - |
| 0.4270 | 1140 | 0.985 | - | - |
| 0.4307 | 1150 | 1.5583 | - | - |
| 0.4345 | 1160 | 0.797 | - | - |
| 0.4382 | 1170 | 1.2212 | - | - |
| 0.4419 | 1180 | 1.3289 | - | - |
| 0.4457 | 1190 | 1.4719 | - | - |
| 0.4494 | 1200 | 0.9898 | - | - |
| 0.4532 | 1210 | 1.5724 | - | - |
| 0.4569 | 1220 | 2.4698 | - | - |
| 0.4607 | 1230 | 1.7312 | - | - |
| 0.4644 | 1240 | 0.8984 | - | - |
| 0.4682 | 1250 | 1.4435 | - | - |
| 0.4719 | 1260 | 0.4182 | - | - |
| 0.4757 | 1270 | 2.5585 | - | - |
| 0.4794 | 1280 | 2.1777 | - | - |
| 0.4831 | 1290 | 1.8817 | - | - |
| 0.4869 | 1300 | 1.3328 | - | - |
| 0.4906 | 1310 | 1.1548 | - | - |
| 0.4944 | 1320 | 1.8619 | - | - |
| 0.4981 | 1330 | 1.8818 | - | - |
| 0.5019 | 1340 | 1.2547 | - | - |
| 0.5056 | 1350 | 1.1262 | - | - |
| 0.5094 | 1360 | 2.4004 | - | - |
| 0.5131 | 1370 | 0.5397 | - | - |
| 0.5169 | 1380 | 1.1227 | - | - |
| 0.5206 | 1390 | 2.1331 | - | - |
| 0.5243 | 1400 | 0.8593 | - | - |
| 0.5281 | 1410 | 1.7893 | - | - |
| 0.5318 | 1420 | 0.5693 | - | - |
| 0.5356 | 1430 | 1.0304 | - | - |
| 0.5393 | 1440 | 0.7579 | - | - |
| 0.5431 | 1450 | 1.5615 | - | - |
| 0.5468 | 1460 | 0.6529 | - | - |
| 0.5506 | 1470 | 0.5767 | - | - |
| 0.5543 | 1480 | 1.3396 | - | - |
| 0.5581 | 1490 | 1.2152 | - | - |
| 0.5618 | 1500 | 0.8144 | - | - |
| 0.5655 | 1510 | 2.0135 | - | - |
| 0.5693 | 1520 | 2.5916 | - | - |
| 0.5730 | 1530 | 1.553 | - | - |
| 0.5768 | 1540 | 0.6537 | - | - |
| 0.5805 | 1550 | 0.7982 | - | - |
| 0.5843 | 1560 | 1.9476 | - | - |
| 0.5880 | 1570 | 0.6488 | - | - |
| 0.5918 | 1580 | 1.0492 | - | - |
| 0.5955 | 1590 | 1.7359 | - | - |
| 0.5993 | 1600 | 2.0695 | - | - |
| 0.6030 | 1610 | 0.7046 | - | - |
| 0.6067 | 1620 | 1.1444 | - | - |
| 0.6105 | 1630 | 0.9934 | - | - |
| 0.6142 | 1640 | 0.5541 | - | - |
| 0.6180 | 1650 | 0.9048 | - | - |
| 0.6217 | 1660 | 1.9154 | - | - |
| 0.6255 | 1670 | 2.3706 | - | - |
| 0.6292 | 1680 | 0.2856 | - | - |
| 0.6330 | 1690 | 1.0283 | - | - |
| 0.6367 | 1700 | 1.2681 | - | - |
| 0.6404 | 1710 | 0.9028 | - | - |
| 0.6442 | 1720 | 0.9902 | - | - |
| 0.6479 | 1730 | 1.3535 | - | - |
| 0.6517 | 1740 | 0.9419 | - | - |
| 0.6554 | 1750 | 0.9893 | - | - |
| 0.6592 | 1760 | 1.4345 | - | - |
| 0.6629 | 1770 | 2.1841 | - | - |
| 0.6667 | 1780 | 0.7408 | - | - |
| 0.6704 | 1790 | 2.4774 | - | - |
| 0.6742 | 1800 | 0.7757 | - | - |
| 0.6779 | 1810 | 2.0088 | - | - |
| 0.6816 | 1820 | 1.5048 | - | - |
| 0.6854 | 1830 | 0.9138 | - | - |
| 0.6891 | 1840 | 1.403 | - | - |
| 0.6929 | 1850 | 1.5927 | - | - |
| 0.6966 | 1860 | 1.0471 | - | - |
| 0.7004 | 1870 | 1.6628 | - | - |
| 0.7041 | 1880 | 0.6006 | - | - |
| 0.7079 | 1890 | 0.2351 | - | - |
| 0.7116 | 1900 | 0.9406 | - | - |
| 0.7154 | 1910 | 1.5868 | - | - |
| 0.7191 | 1920 | 1.1405 | - | - |
| 0.7228 | 1930 | 0.2823 | - | - |
| 0.7266 | 1940 | 1.7329 | - | - |
| 0.7303 | 1950 | 1.7973 | - | - |
| 0.7341 | 1960 | 0.9928 | - | - |
| 0.7378 | 1970 | 1.8539 | - | - |
| 0.7416 | 1980 | 1.7418 | - | - |
| 0.7453 | 1990 | 1.7236 | - | - |
| 0.7491 | 2000 | 0.7957 | - | - |
| 0.7528 | 2010 | 0.0987 | - | - |
| 0.7566 | 2020 | 1.7363 | - | - |
| 0.7603 | 2030 | 0.8135 | - | - |
| 0.7640 | 2040 | 1.7698 | - | - |
| 0.7678 | 2050 | 1.4394 | - | - |
| 0.7715 | 2060 | 0.7707 | - | - |
| 0.7753 | 2070 | 2.7317 | - | - |
| 0.7790 | 2080 | 0.3891 | - | - |
| 0.7828 | 2090 | 2.6116 | - | - |
| 0.7865 | 2100 | 1.1891 | - | - |
| 0.7903 | 2110 | 1.5366 | - | - |
| 0.7940 | 2120 | 0.4196 | - | - |
| 0.7978 | 2130 | 0.745 | - | - |
| 0.8015 | 2140 | 1.4042 | - | - |
| 0.8052 | 2150 | 2.7567 | - | - |
| 0.8090 | 2160 | 1.9903 | - | - |
| 0.8127 | 2170 | 1.8249 | - | - |
| 0.8165 | 2180 | 2.0049 | - | - |
| 0.8202 | 2190 | 1.6193 | - | - |
| 0.8240 | 2200 | 1.0768 | - | - |
| 0.8277 | 2210 | 1.5331 | - | - |
| 0.8315 | 2220 | 0.8109 | - | - |
| 0.8352 | 2230 | 0.6081 | - | - |
| 0.8390 | 2240 | 1.3533 | - | - |
| 0.8427 | 2250 | 2.0449 | - | - |
| 0.8464 | 2260 | 1.1876 | - | - |
| 0.8502 | 2270 | 0.7197 | - | - |
| 0.8539 | 2280 | 0.9462 | - | - |
| 0.8577 | 2290 | 0.7562 | - | - |
| 0.8614 | 2300 | 0.9699 | - | - |
| 0.8652 | 2310 | 1.115 | - | - |
| 0.8689 | 2320 | 0.9679 | - | - |
| 0.8727 | 2330 | 2.0255 | - | - |
| 0.8764 | 2340 | 0.7457 | - | - |
| 0.8801 | 2350 | 0.7221 | - | - |
| 0.8839 | 2360 | 1.4877 | - | - |
| 0.8876 | 2370 | 1.0071 | - | - |
| 0.8914 | 2380 | 1.0958 | - | - |
| 0.8951 | 2390 | 1.2945 | - | - |
| 0.8989 | 2400 | 1.5245 | - | - |
| 0.9026 | 2410 | 0.7008 | - | - |
| 0.9064 | 2420 | 1.5043 | - | - |
| 0.9101 | 2430 | 1.3202 | - | - |
| 0.9139 | 2440 | 1.2748 | - | - |
| 0.9176 | 2450 | 1.3845 | - | - |
| 0.9213 | 2460 | 1.2619 | - | - |
| 0.9251 | 2470 | 1.1196 | - | - |
| 0.9288 | 2480 | 2.1311 | - | - |
| 0.9326 | 2490 | 1.0909 | - | - |
| 0.9363 | 2500 | 2.2843 | - | - |
| 0.9401 | 2510 | 0.4763 | - | - |
| 0.9438 | 2520 | 1.4252 | - | - |
| 0.9476 | 2530 | 1.6419 | - | - |
| 0.9513 | 2540 | 1.6628 | - | - |
| 0.9551 | 2550 | 1.0937 | - | - |
| 0.9588 | 2560 | 2.9318 | - | - |
| 0.9625 | 2570 | 1.3356 | - | - |
| 0.9663 | 2580 | 1.6431 | - | - |
| 0.9700 | 2590 | 2.4785 | - | - |
| 0.9738 | 2600 | 2.0959 | - | - |
| 0.9775 | 2610 | 1.021 | - | - |
| 0.9813 | 2620 | 1.5177 | - | - |
| 0.9850 | 2630 | 1.1866 | - | - |
| 0.9888 | 2640 | 1.5121 | - | - |
| 0.9925 | 2650 | 0.9855 | - | - |
| 0.9963 | 2660 | 0.9257 | - | - |
| 1.0 | 2670 | 0.9811 | 1.0522 | 0.8691 |
| 1.0037 | 2680 | 0.4769 | - | - |
| 1.0075 | 2690 | 0.0788 | - | - |
| 1.0112 | 2700 | 0.4716 | - | - |
| 1.0150 | 2710 | 0.6434 | - | - |
| 1.0187 | 2720 | 0.3483 | - | - |
| 1.0225 | 2730 | 1.1349 | - | - |
| 1.0262 | 2740 | 1.4718 | - | - |
| 1.0300 | 2750 | 0.6267 | - | - |
| 1.0337 | 2760 | 0.7566 | - | - |
| 1.0375 | 2770 | 0.5439 | - | - |
| 1.0412 | 2780 | 1.3736 | - | - |
| 1.0449 | 2790 | 0.482 | - | - |
| 1.0487 | 2800 | 0.8668 | - | - |
| 1.0524 | 2810 | 1.5906 | - | - |
| 1.0562 | 2820 | 1.5024 | - | - |
| 1.0599 | 2830 | 1.1421 | - | - |
| 1.0637 | 2840 | 0.951 | - | - |
| 1.0674 | 2850 | 1.2362 | - | - |
| 1.0712 | 2860 | 0.9021 | - | - |
| 1.0749 | 2870 | 0.6175 | - | - |
| 1.0787 | 2880 | 0.5354 | - | - |
| 1.0824 | 2890 | 0.8739 | - | - |
| 1.0861 | 2900 | 1.2778 | - | - |
| 1.0899 | 2910 | 1.1148 | - | - |
| 1.0936 | 2920 | 1.2744 | - | - |
| 1.0974 | 2930 | 2.8342 | - | - |
| 1.1011 | 2940 | 0.8226 | - | - |
| 1.1049 | 2950 | 0.7788 | - | - |
| 1.1086 | 2960 | 0.2087 | - | - |
| 1.1124 | 2970 | 2.0295 | - | - |
| 1.1161 | 2980 | 0.7227 | - | - |
| 1.1199 | 2990 | 0.3996 | - | - |
| 1.1236 | 3000 | 1.081 | - | - |
| 1.1273 | 3010 | 1.1544 | - | - |
| 1.1311 | 3020 | 1.4191 | - | - |
| 1.1348 | 3030 | 0.9023 | - | - |
| 1.1386 | 3040 | 1.2946 | - | - |
| 1.1423 | 3050 | 0.7664 | - | - |
| 1.1461 | 3060 | 1.8775 | - | - |
| 1.1498 | 3070 | 1.1414 | - | - |
| 1.1536 | 3080 | 1.4882 | - | - |
| 1.1573 | 3090 | 0.9656 | - | - |
| 1.1610 | 3100 | 0.254 | - | - |
| 1.1648 | 3110 | 2.8362 | - | - |
| 1.1685 | 3120 | 1.5211 | - | - |
| 1.1723 | 3130 | 0.5995 | - | - |
| 1.1760 | 3140 | 1.192 | - | - |
| 1.1798 | 3150 | 0.5996 | - | - |
| 1.1835 | 3160 | 0.9875 | - | - |
| 1.1873 | 3170 | 0.9348 | - | - |
| 1.1910 | 3180 | 0.8946 | - | - |
| 1.1948 | 3190 | 1.2509 | - | - |
| 1.1985 | 3200 | 1.5223 | - | - |
| 1.2022 | 3210 | 1.6398 | - | - |
| 1.2060 | 3220 | 1.2502 | - | - |
| 1.2097 | 3230 | 1.713 | - | - |
| 1.2135 | 3240 | 0.2114 | - | - |
| 1.2172 | 3250 | 0.7086 | - | - |
| 1.2210 | 3260 | 1.3041 | - | - |
| 1.2247 | 3270 | 1.2593 | - | - |
| 1.2285 | 3280 | 0.4046 | - | - |
| 1.2322 | 3290 | 1.2122 | - | - |
| 1.2360 | 3300 | 1.3019 | - | - |
| 1.2397 | 3310 | 0.7197 | - | - |
| 1.2434 | 3320 | 0.6891 | - | - |
| 1.2472 | 3330 | 0.7012 | - | - |
| 1.2509 | 3340 | 1.0261 | - | - |
| 1.2547 | 3350 | 1.2433 | - | - |
| 1.2584 | 3360 | 0.1486 | - | - |
| 1.2622 | 3370 | 0.1235 | - | - |
| 1.2659 | 3380 | 1.5325 | - | - |
| 1.2697 | 3390 | 0.7763 | - | - |
| 1.2734 | 3400 | 1.6514 | - | - |
| 1.2772 | 3410 | 1.3432 | - | - |
| 1.2809 | 3420 | 0.9633 | - | - |
| 1.2846 | 3430 | 0.5197 | - | - |
| 1.2884 | 3440 | 1.5208 | - | - |
| 1.2921 | 3450 | 0.1065 | - | - |
| 1.2959 | 3460 | 1.158 | - | - |
| 1.2996 | 3470 | 0.1859 | - | - |
| 1.3034 | 3480 | 0.5727 | - | - |
| 1.3071 | 3490 | 0.4956 | - | - |
| 1.3109 | 3500 | 1.7412 | - | - |
| 1.3146 | 3510 | 1.0473 | - | - |
| 1.3184 | 3520 | 1.1178 | - | - |
| 1.3221 | 3530 | 2.0815 | - | - |
| 1.3258 | 3540 | 2.2776 | - | - |
| 1.3296 | 3550 | 0.7169 | - | - |
| 1.3333 | 3560 | 1.3027 | - | - |
| 1.3371 | 3570 | 1.7225 | - | - |
| 1.3408 | 3580 | 0.7588 | - | - |
| 1.3446 | 3590 | 0.7847 | - | - |
| 1.3483 | 3600 | 0.9037 | - | - |
| 1.3521 | 3610 | 1.3455 | - | - |
| 1.3558 | 3620 | 0.9022 | - | - |
| 1.3596 | 3630 | 0.1956 | - | - |
| 1.3633 | 3640 | 1.0445 | - | - |
| 1.3670 | 3650 | 0.8999 | - | - |
| 1.3708 | 3660 | 0.439 | - | - |
| 1.3745 | 3670 | 1.1256 | - | - |
| 1.3783 | 3680 | 0.8729 | - | - |
| 1.3820 | 3690 | 2.2068 | - | - |
| 1.3858 | 3700 | 1.6487 | - | - |
| 1.3895 | 3710 | 0.9254 | - | - |
| 1.3933 | 3720 | 0.2883 | - | - |
| 1.3970 | 3730 | 0.8981 | - | - |
| 1.4007 | 3740 | 1.2252 | - | - |
| 1.4045 | 3750 | 0.8682 | - | - |
| 1.4082 | 3760 | 0.8365 | - | - |
| 1.4120 | 3770 | 1.8876 | - | - |
| 1.4157 | 3780 | 0.6073 | - | - |
| 1.4195 | 3790 | 0.9617 | - | - |
| 1.4232 | 3800 | 0.2706 | - | - |
| 1.4270 | 3810 | 0.3518 | - | - |
| 1.4307 | 3820 | 1.1181 | - | - |
| 1.4345 | 3830 | 1.2088 | - | - |
| 1.4382 | 3840 | 0.8219 | - | - |
| 1.4419 | 3850 | 1.0337 | - | - |
| 1.4457 | 3860 | 1.5798 | - | - |
| 1.4494 | 3870 | 0.293 | - | - |
| 1.4532 | 3880 | 0.577 | - | - |
| 1.4569 | 3890 | 1.1591 | - | - |
| 1.4607 | 3900 | 0.677 | - | - |
| 1.4644 | 3910 | 0.2807 | - | - |
| 1.4682 | 3920 | 0.8355 | - | - |
| 1.4719 | 3930 | 1.1842 | - | - |
| 1.4757 | 3940 | 1.1249 | - | - |
| 1.4794 | 3950 | 0.9494 | - | - |
| 1.4831 | 3960 | 0.3435 | - | - |
| 1.4869 | 3970 | 0.491 | - | - |
| 1.4906 | 3980 | 0.024 | - | - |
| 1.4944 | 3990 | 0.4431 | - | - |
| 1.4981 | 4000 | 0.3127 | - | - |
| 1.5019 | 4010 | 1.1624 | - | - |
| 1.5056 | 4020 | 0.7637 | - | - |
| 1.5094 | 4030 | 0.2917 | - | - |
| 1.5131 | 4040 | 0.5337 | - | - |
| 1.5169 | 4050 | 0.4679 | - | - |
| 1.5206 | 4060 | 1.1765 | - | - |
| 1.5243 | 4070 | 1.5454 | - | - |
| 1.5281 | 4080 | 1.1035 | - | - |
| 1.5318 | 4090 | 0.4787 | - | - |
| 1.5356 | 4100 | 1.1475 | - | - |
| 1.5393 | 4110 | 2.5765 | - | - |
| 1.5431 | 4120 | 0.8925 | - | - |
| 1.5468 | 4130 | 1.1461 | - | - |
| 1.5506 | 4140 | 1.0587 | - | - |
| 1.5543 | 4150 | 0.8122 | - | - |
| 1.5581 | 4160 | 1.197 | - | - |
| 1.5618 | 4170 | 1.5496 | - | - |
| 1.5655 | 4180 | 0.5243 | - | - |
| 1.5693 | 4190 | 1.1577 | - | - |
| 1.5730 | 4200 | 0.8121 | - | - |
| 1.5768 | 4210 | 0.623 | - | - |
| 1.5805 | 4220 | 0.7428 | - | - |
| 1.5843 | 4230 | 1.3538 | - | - |
| 1.5880 | 4240 | 0.5452 | - | - |
| 1.5918 | 4250 | 0.6693 | - | - |
| 1.5955 | 4260 | 0.5567 | - | - |
| 1.5993 | 4270 | 1.1811 | - | - |
| 1.6030 | 4280 | 0.5026 | - | - |
| 1.6067 | 4290 | 0.8282 | - | - |
| 1.6105 | 4300 | 1.3515 | - | - |
| 1.6142 | 4310 | 1.0876 | - | - |
| 1.6180 | 4320 | 1.3355 | - | - |
| 1.6217 | 4330 | 0.7432 | - | - |
| 1.6255 | 4340 | 0.7268 | - | - |
| 1.6292 | 4350 | 2.156 | - | - |
| 1.6330 | 4360 | 0.5804 | - | - |
| 1.6367 | 4370 | 0.5645 | - | - |
| 1.6404 | 4380 | 0.3972 | - | - |
| 1.6442 | 4390 | 0.3717 | - | - |
| 1.6479 | 4400 | 0.3682 | - | - |
| 1.6517 | 4410 | 0.8165 | - | - |
| 1.6554 | 4420 | 0.4629 | - | - |
| 1.6592 | 4430 | 0.4669 | - | - |
| 1.6629 | 4440 | 1.4872 | - | - |
| 1.6667 | 4450 | 0.0391 | - | - |
| 1.6704 | 4460 | 0.5723 | - | - |
| 1.6742 | 4470 | 0.1429 | - | - |
| 1.6779 | 4480 | 1.3683 | - | - |
| 1.6816 | 4490 | 0.2154 | - | - |
| 1.6854 | 4500 | 0.486 | - | - |
| 1.6891 | 4510 | 0.57 | - | - |
| 1.6929 | 4520 | 0.4862 | - | - |
| 1.6966 | 4530 | 0.7939 | - | - |
| 1.7004 | 4540 | 1.6848 | - | - |
| 1.7041 | 4550 | 0.7317 | - | - |
| 1.7079 | 4560 | 0.9226 | - | - |
| 1.7116 | 4570 | 0.9461 | - | - |
| 1.7154 | 4580 | 0.5289 | - | - |
| 1.7191 | 4590 | 0.9467 | - | - |
| 1.7228 | 4600 | 0.4374 | - | - |
| 1.7266 | 4610 | 0.8408 | - | - |
| 1.7303 | 4620 | 0.7935 | - | - |
| 1.7341 | 4630 | 0.8529 | - | - |
| 1.7378 | 4640 | 0.9103 | - | - |
| 1.7416 | 4650 | 0.8169 | - | - |
| 1.7453 | 4660 | 0.7316 | - | - |
| 1.7491 | 4670 | 0.3014 | - | - |
| 1.7528 | 4680 | 1.0149 | - | - |
| 1.7566 | 4690 | 1.1554 | - | - |
| 1.7603 | 4700 | 0.9175 | - | - |
| 1.7640 | 4710 | 0.332 | - | - |
| 1.7678 | 4720 | 1.0431 | - | - |
| 1.7715 | 4730 | 0.4539 | - | - |
| 1.7753 | 4740 | 0.3434 | - | - |
| 1.7790 | 4750 | 1.6847 | - | - |
| 1.7828 | 4760 | 0.6125 | - | - |
| 1.7865 | 4770 | 0.6509 | - | - |
| 1.7903 | 4780 | 2.1171 | - | - |
| 1.7940 | 4790 | 0.1296 | - | - |
| 1.7978 | 4800 | 0.8468 | - | - |
| 1.8015 | 4810 | 0.8887 | - | - |
| 1.8052 | 4820 | 0.475 | - | - |
| 1.8090 | 4830 | 1.1306 | - | - |
| 1.8127 | 4840 | 1.56 | - | - |
| 1.8165 | 4850 | 1.446 | - | - |
| 1.8202 | 4860 | 1.1175 | - | - |
| 1.8240 | 4870 | 1.5735 | - | - |
| 1.8277 | 4880 | 1.749 | - | - |
| 1.8315 | 4890 | 0.6597 | - | - |
| 1.8352 | 4900 | 0.8736 | - | - |
| 1.8390 | 4910 | 0.2586 | - | - |
| 1.8427 | 4920 | 1.0175 | - | - |
| 1.8464 | 4930 | 1.0651 | - | - |
| 1.8502 | 4940 | 0.3644 | - | - |
| 1.8539 | 4950 | 0.7849 | - | - |
| 1.8577 | 4960 | 1.4129 | - | - |
| 1.8614 | 4970 | 1.3896 | - | - |
| 1.8652 | 4980 | 0.5037 | - | - |
| 1.8689 | 4990 | 0.2482 | - | - |
| 1.8727 | 5000 | 1.1326 | - | - |
| 1.8764 | 5010 | 0.7214 | - | - |
| 1.8801 | 5020 | 0.7837 | - | - |
| 1.8839 | 5030 | 1.9915 | - | - |
| 1.8876 | 5040 | 1.0516 | - | - |
| 1.8914 | 5050 | 0.8879 | - | - |
| 1.8951 | 5060 | 1.6854 | - | - |
| 1.8989 | 5070 | 1.313 | - | - |
| 1.9026 | 5080 | 0.5719 | - | - |
| 1.9064 | 5090 | 0.2045 | - | - |
| 1.9101 | 5100 | 0.4238 | - | - |
| 1.9139 | 5110 | 0.8916 | - | - |
| 1.9176 | 5120 | 0.9572 | - | - |
| 1.9213 | 5130 | 0.9926 | - | - |
| 1.9251 | 5140 | 1.3111 | - | - |
| 1.9288 | 5150 | 0.7925 | - | - |
| 1.9326 | 5160 | 0.8453 | - | - |
| 1.9363 | 5170 | 0.2731 | - | - |
| 1.9401 | 5180 | 1.3019 | - | - |
| 1.9438 | 5190 | 1.2677 | - | - |
| 1.9476 | 5200 | 1.5136 | - | - |
| 1.9513 | 5210 | 1.4283 | - | - |
| 1.9551 | 5220 | 1.4765 | - | - |
| 1.9588 | 5230 | 0.3049 | - | - |
| 1.9625 | 5240 | 0.988 | - | - |
| 1.9663 | 5250 | 1.7154 | - | - |
| 1.9700 | 5260 | 0.5865 | - | - |
| 1.9738 | 5270 | 0.8685 | - | - |
| 1.9775 | 5280 | 2.1119 | - | - |
| 1.9813 | 5290 | 1.6986 | - | - |
| 1.9850 | 5300 | 0.9968 | - | - |
| 1.9888 | 5310 | 0.6045 | - | - |
| 1.9925 | 5320 | 0.7844 | - | - |
| 1.9963 | 5330 | 0.7483 | - | - |
| 2.0 | 5340 | 2.4421 | 0.8997 | 0.8694 |
| 2.0037 | 5350 | 0.3721 | - | - |
| 2.0075 | 5360 | 0.7311 | - | - |
| 2.0112 | 5370 | 0.4219 | - | - |
| 2.0150 | 5380 | 0.5756 | - | - |
| 2.0187 | 5390 | 0.2848 | - | - |
| 2.0225 | 5400 | 0.7341 | - | - |
| 2.0262 | 5410 | 0.4964 | - | - |
| 2.0300 | 5420 | 0.1535 | - | - |
| 2.0337 | 5430 | 0.4309 | - | - |
| 2.0375 | 5440 | 0.3544 | - | - |
| 2.0412 | 5450 | 0.2336 | - | - |
| 2.0449 | 5460 | 1.212 | - | - |
| 2.0487 | 5470 | 0.5154 | - | - |
| 2.0524 | 5480 | 0.1163 | - | - |
| 2.0562 | 5490 | 0.9765 | - | - |
| 2.0599 | 5500 | 0.2086 | - | - |
| 2.0637 | 5510 | 0.2978 | - | - |
| 2.0674 | 5520 | 1.9357 | - | - |
| 2.0712 | 5530 | 0.6232 | - | - |
| 2.0749 | 5540 | 0.6823 | - | - |
| 2.0787 | 5550 | 0.0296 | - | - |
| 2.0824 | 5560 | 0.9172 | - | - |
| 2.0861 | 5570 | 0.3007 | - | - |
| 2.0899 | 5580 | 0.4675 | - | - |
| 2.0936 | 5590 | 0.1491 | - | - |
| 2.0974 | 5600 | 1.1711 | - | - |
| 2.1011 | 5610 | 0.6131 | - | - |
| 2.1049 | 5620 | 0.0001 | - | - |
| 2.1086 | 5630 | 0.408 | - | - |
| 2.1124 | 5640 | 0.0041 | - | - |
| 2.1161 | 5650 | 0.2059 | - | - |
| 2.1199 | 5660 | 0.675 | - | - |
| 2.1236 | 5670 | 0.6992 | - | - |
| 2.1273 | 5680 | 0.3526 | - | - |
| 2.1311 | 5690 | 0.2875 | - | - |
| 2.1348 | 5700 | 0.6462 | - | - |
| 2.1386 | 5710 | 0.3409 | - | - |
| 2.1423 | 5720 | 0.4659 | - | - |
| 2.1461 | 5730 | 0.4404 | - | - |
| 2.1498 | 5740 | 0.9744 | - | - |
| 2.1536 | 5750 | 0.5018 | - | - |
| 2.1573 | 5760 | 0.5624 | - | - |
| 2.1610 | 5770 | 0.9155 | - | - |
| 2.1648 | 5780 | 0.1129 | - | - |
| 2.1685 | 5790 | 0.0347 | - | - |
| 2.1723 | 5800 | 1.0591 | - | - |
| 2.1760 | 5810 | 0.0432 | - | - |
| 2.1798 | 5820 | 0.041 | - | - |
| 2.1835 | 5830 | 0.2072 | - | - |
| 2.1873 | 5840 | 0.3379 | - | - |
| 2.1910 | 5850 | 0.3583 | - | - |
| 2.1948 | 5860 | 0.4276 | - | - |
| 2.1985 | 5870 | 0.4756 | - | - |
| 2.2022 | 5880 | 1.0452 | - | - |
| 2.2060 | 5890 | 0.1116 | - | - |
| 2.2097 | 5900 | 0.199 | - | - |
| 2.2135 | 5910 | 0.8664 | - | - |
| 2.2172 | 5920 | 0.5747 | - | - |
| 2.2210 | 5930 | 1.655 | - | - |
| 2.2247 | 5940 | 0.8386 | - | - |
| 2.2285 | 5950 | 0.5677 | - | - |
| 2.2322 | 5960 | 0.3334 | - | - |
| 2.2360 | 5970 | 0.3344 | - | - |
| 2.2397 | 5980 | 0.7777 | - | - |
| 2.2434 | 5990 | 0.359 | - | - |
| 2.2472 | 6000 | 0.8495 | - | - |
| 2.2509 | 6010 | 1.5789 | - | - |
| 2.2547 | 6020 | 0.3153 | - | - |
| 2.2584 | 6030 | 1.2525 | - | - |
| 2.2622 | 6040 | 0.9709 | - | - |
| 2.2659 | 6050 | 0.6815 | - | - |
| 2.2697 | 6060 | 0.5785 | - | - |
| 2.2734 | 6070 | 0.605 | - | - |
| 2.2772 | 6080 | 0.7768 | - | - |
| 2.2809 | 6090 | 0.2885 | - | - |
| 2.2846 | 6100 | 0.0714 | - | - |
| 2.2884 | 6110 | 1.5183 | - | - |
| 2.2921 | 6120 | 0.8578 | - | - |
| 2.2959 | 6130 | 0.4251 | - | - |
| 2.2996 | 6140 | 1.1994 | - | - |
| 2.3034 | 6150 | 0.5432 | - | - |
| 2.3071 | 6160 | 1.3702 | - | - |
| 2.3109 | 6170 | 1.3417 | - | - |
| 2.3146 | 6180 | 0.9556 | - | - |
| 2.3184 | 6190 | 0.3698 | - | - |
| 2.3221 | 6200 | 0.6896 | - | - |
| 2.3258 | 6210 | 0.6184 | - | - |
| 2.3296 | 6220 | 0.2338 | - | - |
| 2.3333 | 6230 | 0.342 | - | - |
| 2.3371 | 6240 | 0.8367 | - | - |
| 2.3408 | 6250 | 1.1024 | - | - |
| 2.3446 | 6260 | 1.0722 | - | - |
| 2.3483 | 6270 | 0.479 | - | - |
| 2.3521 | 6280 | 0.8954 | - | - |
| 2.3558 | 6290 | 0.8496 | - | - |
| 2.3596 | 6300 | 1.2729 | - | - |
| 2.3633 | 6310 | 0.1914 | - | - |
| 2.3670 | 6320 | 0.8034 | - | - |
| 2.3708 | 6330 | 0.9549 | - | - |
| 2.3745 | 6340 | 1.1065 | - | - |
| 2.3783 | 6350 | 1.5595 | - | - |
| 2.3820 | 6360 | 0.2028 | - | - |
| 2.3858 | 6370 | 0.2768 | - | - |
| 2.3895 | 6380 | 1.6269 | - | - |
| 2.3933 | 6390 | 0.8796 | - | - |
| 2.3970 | 6400 | 0.6491 | - | - |
| 2.4007 | 6410 | 0.8528 | - | - |
| 2.4045 | 6420 | 0.9181 | - | - |
| 2.4082 | 6430 | 1.2277 | - | - |
| 2.4120 | 6440 | 0.7166 | - | - |
| 2.4157 | 6450 | 1.0847 | - | - |
| 2.4195 | 6460 | 0.5212 | - | - |
| 2.4232 | 6470 | 0.0076 | - | - |
| 2.4270 | 6480 | 0.7151 | - | - |
| 2.4307 | 6490 | 0.4312 | - | - |
| 2.4345 | 6500 | 0.7043 | - | - |
| 2.4382 | 6510 | 0.3567 | - | - |
| 2.4419 | 6520 | 1.5934 | - | - |
| 2.4457 | 6530 | 0.3477 | - | - |
| 2.4494 | 6540 | 0.218 | - | - |
| 2.4532 | 6550 | 1.3359 | - | - |
| 2.4569 | 6560 | 1.4008 | - | - |
| 2.4607 | 6570 | 0.3827 | - | - |
| 2.4644 | 6580 | 0.4915 | - | - |
| 2.4682 | 6590 | 0.8245 | - | - |
| 2.4719 | 6600 | 0.7731 | - | - |
| 2.4757 | 6610 | 0.5343 | - | - |
| 2.4794 | 6620 | 0.9325 | - | - |
| 2.4831 | 6630 | 1.1319 | - | - |
| 2.4869 | 6640 | 0.3839 | - | - |
| 2.4906 | 6650 | 1.1552 | - | - |
| 2.4944 | 6660 | 1.2688 | - | - |
| 2.4981 | 6670 | 1.3912 | - | - |
| 2.5019 | 6680 | 1.1167 | - | - |
| 2.5056 | 6690 | 0.9472 | - | - |
| 2.5094 | 6700 | 0.2292 | - | - |
| 2.5131 | 6710 | 0.9339 | - | - |
| 2.5169 | 6720 | 1.5937 | - | - |
| 2.5206 | 6730 | 0.7239 | - | - |
| 2.5243 | 6740 | 0.3039 | - | - |
| 2.5281 | 6750 | 0.7881 | - | - |
| 2.5318 | 6760 | 1.1695 | - | - |
| 2.5356 | 6770 | 0.5547 | - | - |
| 2.5393 | 6780 | 1.5877 | - | - |
| 2.5431 | 6790 | 1.1356 | - | - |
| 2.5468 | 6800 | 2.0128 | - | - |
| 2.5506 | 6810 | 0.4398 | - | - |
| 2.5543 | 6820 | 0.2939 | - | - |
| 2.5581 | 6830 | 0.3846 | - | - |
| 2.5618 | 6840 | 0.5616 | - | - |
| 2.5655 | 6850 | 0.2842 | - | - |
| 2.5693 | 6860 | 0.5759 | - | - |
| 2.5730 | 6870 | 0.9994 | - | - |
| 2.5768 | 6880 | 0.7186 | - | - |
| 2.5805 | 6890 | 0.6981 | - | - |
| 2.5843 | 6900 | 0.9311 | - | - |
| 2.5880 | 6910 | 0.5467 | - | - |
| 2.5918 | 6920 | 0.4206 | - | - |
| 2.5955 | 6930 | 1.2741 | - | - |
| 2.5993 | 6940 | 0.6233 | - | - |
| 2.6030 | 6950 | 1.0366 | - | - |
| 2.6067 | 6960 | 1.3207 | - | - |
| 2.6105 | 6970 | 0.8609 | - | - |
| 2.6142 | 6980 | 0.1173 | - | - |
| 2.6180 | 6990 | 0.2675 | - | - |
| 2.6217 | 7000 | 0.223 | - | - |
| 2.6255 | 7010 | 1.3045 | - | - |
| 2.6292 | 7020 | 0.5441 | - | - |
| 2.6330 | 7030 | 0.7044 | - | - |
| 2.6367 | 7040 | 0.3714 | - | - |
| 2.6404 | 7050 | 1.1058 | - | - |
| 2.6442 | 7060 | 0.7746 | - | - |
| 2.6479 | 7070 | 0.447 | - | - |
| 2.6517 | 7080 | 0.2873 | - | - |
| 2.6554 | 7090 | 0.4244 | - | - |
| 2.6592 | 7100 | 0.1731 | - | - |
| 2.6629 | 7110 | 0.6776 | - | - |
| 2.6667 | 7120 | 0.7168 | - | - |
| 2.6704 | 7130 | 0.3992 | - | - |
| 2.6742 | 7140 | 1.2154 | - | - |
| 2.6779 | 7150 | 0.0108 | - | - |
| 2.6816 | 7160 | 0.1631 | - | - |
| 2.6854 | 7170 | 0.8925 | - | - |
| 2.6891 | 7180 | 0.4143 | - | - |
| 2.6929 | 7190 | 0.7408 | - | - |
| 2.6966 | 7200 | 0.385 | - | - |
| 2.7004 | 7210 | 0.2634 | - | - |
| 2.7041 | 7220 | 0.5994 | - | - |
| 2.7079 | 7230 | 1.6884 | - | - |
| 2.7116 | 7240 | 0.8603 | - | - |
| 2.7154 | 7250 | 1.6581 | - | - |
| 2.7191 | 7260 | 1.9287 | - | - |
| 2.7228 | 7270 | 0.7015 | - | - |
| 2.7266 | 7280 | 0.5795 | - | - |
| 2.7303 | 7290 | 0.3925 | - | - |
| 2.7341 | 7300 | 0.951 | - | - |
| 2.7378 | 7310 | 1.3277 | - | - |
| 2.7416 | 7320 | 0.8554 | - | - |
| 2.7453 | 7330 | 1.7586 | - | - |
| 2.7491 | 7340 | 0.6161 | - | - |
| 2.7528 | 7350 | 0.4676 | - | - |
| 2.7566 | 7360 | 1.1006 | - | - |
| 2.7603 | 7370 | 3.297 | - | - |
| 2.7640 | 7380 | 0.9984 | - | - |
| 2.7678 | 7390 | 0.3403 | - | - |
| 2.7715 | 7400 | 1.1086 | - | - |
| 2.7753 | 7410 | 2.0014 | - | - |
| 2.7790 | 7420 | 0.9262 | - | - |
| 2.7828 | 7430 | 0.3025 | - | - |
| 2.7865 | 7440 | 1.5384 | - | - |
| 2.7903 | 7450 | 0.3463 | - | - |
| 2.7940 | 7460 | 0.4185 | - | - |
| 2.7978 | 7470 | 0.2442 | - | - |
| 2.8015 | 7480 | 0.5406 | - | - |
| 2.8052 | 7490 | 1.0035 | - | - |
| 2.8090 | 7500 | 0.6287 | - | - |
| 2.8127 | 7510 | 0.8633 | - | - |
| 2.8165 | 7520 | 0.3335 | - | - |
| 2.8202 | 7530 | 0.5338 | - | - |
| 2.8240 | 7540 | 1.2167 | - | - |
| 2.8277 | 7550 | 0.2822 | - | - |
| 2.8315 | 7560 | 1.4419 | - | - |
| 2.8352 | 7570 | 0.4631 | - | - |
| 2.8390 | 7580 | 1.0021 | - | - |
| 2.8427 | 7590 | 0.7075 | - | - |
| 2.8464 | 7600 | 0.3762 | - | - |
| 2.8502 | 7610 | 0.6409 | - | - |
| 2.8539 | 7620 | 0.87 | - | - |
| 2.8577 | 7630 | 0.5116 | - | - |
| 2.8614 | 7640 | 0.5592 | - | - |
| 2.8652 | 7650 | 1.3226 | - | - |
| 2.8689 | 7660 | 0.7833 | - | - |
| 2.8727 | 7670 | 0.1119 | - | - |
| 2.8764 | 7680 | 0.8755 | - | - |
| 2.8801 | 7690 | 0.9209 | - | - |
| 2.8839 | 7700 | 1.042 | - | - |
| 2.8876 | 7710 | 0.261 | - | - |
| 2.8914 | 7720 | 1.2104 | - | - |
| 2.8951 | 7730 | 0.2092 | - | - |
| 2.8989 | 7740 | 0.8415 | - | - |
| 2.9026 | 7750 | 0.8367 | - | - |
| 2.9064 | 7760 | 1.4419 | - | - |
| 2.9101 | 7770 | 0.6172 | - | - |
| 2.9139 | 7780 | 0.5117 | - | - |
| 2.9176 | 7790 | 1.3995 | - | - |
| 2.9213 | 7800 | 0.3404 | - | - |
| 2.9251 | 7810 | 0.5748 | - | - |
| 2.9288 | 7820 | 0.9675 | - | - |
| 2.9326 | 7830 | 0.385 | - | - |
| 2.9363 | 7840 | 0.0263 | - | - |
| 2.9401 | 7850 | 0.6896 | - | - |
| 2.9438 | 7860 | 0.2887 | - | - |
| 2.9476 | 7870 | 1.1583 | - | - |
| 2.9513 | 7880 | 0.5359 | - | - |
| 2.9551 | 7890 | 0.1456 | - | - |
| 2.9588 | 7900 | 0.3246 | - | - |
| 2.9625 | 7910 | 1.0288 | - | - |
| 2.9663 | 7920 | 0.2144 | - | - |
| 2.9700 | 7930 | 1.2785 | - | - |
| 2.9738 | 7940 | 1.2584 | - | - |
| 2.9775 | 7950 | 0.7309 | - | - |
| 2.9813 | 7960 | 0.1397 | - | - |
| 2.9850 | 7970 | 0.2638 | - | - |
| 2.9888 | 7980 | 0.7613 | - | - |
| 2.9925 | 7990 | 0.3807 | - | - |
| 2.9963 | 8000 | 1.2176 | - | - |
| 3.0 | 8010 | 1.2772 | 0.9109 | 0.8725 |
</details>
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.3.1
- Transformers: 4.48.0.dev0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.1.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### BatchHardTripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CRAFT"
] | Non_BioNLP |
tmmazen/bge-base-st-phyto | tmmazen | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:806",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,726 | 1,726 | 5 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:806
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: La posologie habituelle est de 1 à 2 g de plante séchée par tasse
d’eau bouillante, en infusion, plusieurs fois par jour, ou l’équivalent en herbe
fraîche. Pour la prise en extrait liquide (sirop), la posologie est calculée en
fonction de la concentration en principes actifs. Une tisane de thym ou une goutte
d’huile essentielle diluée dans un bol d’eau chaude peuvent également être utilisées
en inhalation pour dégager les voies respiratoires. Des infusions plus concentrées
(5 g de plante sèche pour 100 ml d’eau) sont employées en gargarisme ou en bain
de bouche. Les mélanges feuilles de thym / racine de primevère sont à utiliser
selon les recommandations de la notice du produit.
sentences:
- La plante Bourdaine est-elle sans danger pendant la grossesse et l'allaitement?
- Quelles sont les formes et le dosage recommandés pour la plante Thym?
- Quel est l'avis d'un expert sur la plante Valériane?
- source_sentence: L’allergie reconnue au fenouil ou à l’anéthol, ou à toute plante
apparentée comme le céleri, le cumin, la coriandre, l’anis vert ou l’aneth, en
contre-indique l’utilisation. Les personnes allergiques aux plantes de la famille
des astéracées (anciennement famille des composées) comme l’armoise, l’ambroisie
ou la camomille, doivent également utiliser le fenouil avec prudence. De plus,
les femmes qui souffrent ou ont souffert de cancer hormonodépendant (cancer du
sein, des ovaires ou de l'utérus) doivent s'abstenir de prendre un traitement
à base de fenouil, du fait des propriétés estrogéniques de cette plante.
sentences:
- Quelle est la description de la plante Saule blanc?
- Quelles sont les contre-indications pour la plante Petite centaurée?
- Quelles sont les contre-indications pour la plante Fenouil?
- source_sentence: En tisane, la dose recommandée est de 3 g de graines écrasées en
infusion, dans 150 ml d’eau frémissante, trois fois par jour. Pour soulager des
troubles digestifs, il est possible de mâcher une petite cuillerée de graines
à la fin du repas. L’huile essentielle d’anis vert peut être utilisée en inhalation
lors de rhume ou de bronchite. Il suffit de diluer quelques gouttes d’huile essentielle
dans un bol d’eau bouillante et d’inhaler les vapeurs. Dans tous les cas, si les
symptômes persistent au-delà de 48 heures, une consultation médicale est nécessaire.
sentences:
- Quelles sont les formes et le dosage recommandés pour la plante Anis vert?
- Quelle est la description de la plante Fenouil?
- Comment utiliser la plante Valériane?
- source_sentence: Les produits à base de lierre grimpant pourraient interagir avec
les médicaments de la toux (antitussifs).
sentences:
- Quelles sont les interactions possibles de la plante Lierre grimpant avec d’autres
substances?
- Quelles sont les contre-indications pour la plante Ginseng?
- Que pensent les autorités de santé de la plante Bouillon-blanc?
- source_sentence: Les personnes qui prennent des anticoagulants (fluidifiants du
sang) devraient s’abstenir de consommer de grandes quantités de canneberge. De
plus, il est préférable de ne pas prendre de fortes doses de canneberge avec d’autres
plantes anticoagulantes (par exemple l’ail, le ginkgo, le ginseng, l’éleuthérocoque,
le saule blanc, le kava, la fève tonka, etc.), ni avec des produits contenant
des acides gras oméga-3. La canneberge pourrait également interagir avec les médicaments
prescrits pour lutter contre les brûlures d’estomac (antihistaminiques H2 et inhibiteurs
de la pompe à protons).
sentences:
- Quelle est l'efficacité de la plante Bouillon-blanc?
- Quelle est l'origine et quels sont les usages de la plante Petite centaurée?
- Quelles sont les interactions possibles de la plante Canneberge ou cranberry avec
d’autres substances?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.32222222222222224
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6333333333333333
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8111111111111111
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32222222222222224
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12666666666666665
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0811111111111111
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.32222222222222224
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6333333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8111111111111111
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5423585397564108
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4590123456790124
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4692052653121261
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.28888888888888886
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6333333333333333
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8333333333333334
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.28888888888888886
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12666666666666665
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08333333333333331
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.28888888888888886
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6333333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8333333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5318522788264519
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4391534391534392
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.448465509528844
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.3
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6222222222222222
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8333333333333334
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16666666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12444444444444444
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08333333333333331
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6222222222222222
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8333333333333334
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5412674141664099
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4508994708994711
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4591974099649539
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.2777777777777778
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4888888888888889
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6111111111111112
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8111111111111111
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2777777777777778
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16296296296296295
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1222222222222222
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0811111111111111
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2777777777777778
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4888888888888889
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6111111111111112
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8111111111111111
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5157896241121187
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4248500881834215
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4341048300656952
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.37777777777777777
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5444444444444444
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7444444444444445
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1259259259259259
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10888888888888888
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07444444444444444
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.2
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.37777777777777777
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5444444444444444
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7444444444444445
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4349093331984765
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3400573192239858
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.35002712945861025
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tmmazen/bge-base-st-phyto")
# Run inference
sentences = [
'Les personnes qui prennent des anticoagulants (fluidifiants du sang) devraient s’abstenir de consommer de grandes quantités de canneberge. De plus, il est préférable de ne pas prendre de fortes doses de canneberge avec d’autres plantes anticoagulantes (par exemple l’ail, le ginkgo, le ginseng, l’éleuthérocoque, le saule blanc, le kava, la fève tonka, etc.), ni avec des produits contenant des acides gras oméga-3. La canneberge pourrait également interagir avec les médicaments prescrits pour lutter contre les brûlures d’estomac (antihistaminiques H2 et inhibiteurs de la pompe à protons).',
'Quelles sont les interactions possibles de la plante Canneberge ou cranberry avec d’autres substances?',
"Quelle est l'efficacité de la plante Bouillon-blanc?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
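Because the model was trained with a Matryoshka objective, its embeddings can also be truncated to the smaller dimensionalities evaluated below. A minimal sketch follows; the choice of `truncate_dim=256` is just one of the evaluated sizes and the example sentences are illustrative:

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings.
model = SentenceTransformer("tmmazen/bge-base-st-phyto", truncate_dim=256)

embeddings = model.encode([
    "Quelles sont les contre-indications pour la plante Fenouil?",
    "Quelle est la description de la plante Saule blanc?",
])
print(embeddings.shape)
# (2, 256)
```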
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3222 |
| cosine_accuracy@3 | 0.5 |
| cosine_accuracy@5 | 0.6333 |
| cosine_accuracy@10 | 0.8111 |
| cosine_precision@1 | 0.3222 |
| cosine_precision@3 | 0.1667 |
| cosine_precision@5 | 0.1267 |
| cosine_precision@10 | 0.0811 |
| cosine_recall@1 | 0.3222 |
| cosine_recall@3 | 0.5 |
| cosine_recall@5 | 0.6333 |
| cosine_recall@10 | 0.8111 |
| cosine_ndcg@10 | 0.5424 |
| cosine_mrr@10 | 0.459 |
| **cosine_map@100** | **0.4692** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2889 |
| cosine_accuracy@3 | 0.5 |
| cosine_accuracy@5 | 0.6333 |
| cosine_accuracy@10 | 0.8333 |
| cosine_precision@1 | 0.2889 |
| cosine_precision@3 | 0.1667 |
| cosine_precision@5 | 0.1267 |
| cosine_precision@10 | 0.0833 |
| cosine_recall@1 | 0.2889 |
| cosine_recall@3 | 0.5 |
| cosine_recall@5 | 0.6333 |
| cosine_recall@10 | 0.8333 |
| cosine_ndcg@10 | 0.5319 |
| cosine_mrr@10 | 0.4392 |
| **cosine_map@100** | **0.4485** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3 |
| cosine_accuracy@3 | 0.5 |
| cosine_accuracy@5 | 0.6222 |
| cosine_accuracy@10 | 0.8333 |
| cosine_precision@1 | 0.3 |
| cosine_precision@3 | 0.1667 |
| cosine_precision@5 | 0.1244 |
| cosine_precision@10 | 0.0833 |
| cosine_recall@1 | 0.3 |
| cosine_recall@3 | 0.5 |
| cosine_recall@5 | 0.6222 |
| cosine_recall@10 | 0.8333 |
| cosine_ndcg@10 | 0.5413 |
| cosine_mrr@10 | 0.4509 |
| **cosine_map@100** | **0.4592** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.2778 |
| cosine_accuracy@3 | 0.4889 |
| cosine_accuracy@5 | 0.6111 |
| cosine_accuracy@10 | 0.8111 |
| cosine_precision@1 | 0.2778 |
| cosine_precision@3 | 0.163 |
| cosine_precision@5 | 0.1222 |
| cosine_precision@10 | 0.0811 |
| cosine_recall@1 | 0.2778 |
| cosine_recall@3 | 0.4889 |
| cosine_recall@5 | 0.6111 |
| cosine_recall@10 | 0.8111 |
| cosine_ndcg@10 | 0.5158 |
| cosine_mrr@10 | 0.4249 |
| **cosine_map@100** | **0.4341** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:---------|
| cosine_accuracy@1 | 0.2 |
| cosine_accuracy@3 | 0.3778 |
| cosine_accuracy@5 | 0.5444 |
| cosine_accuracy@10 | 0.7444 |
| cosine_precision@1 | 0.2 |
| cosine_precision@3 | 0.1259 |
| cosine_precision@5 | 0.1089 |
| cosine_precision@10 | 0.0744 |
| cosine_recall@1 | 0.2 |
| cosine_recall@3 | 0.3778 |
| cosine_recall@5 | 0.5444 |
| cosine_recall@10 | 0.7444 |
| cosine_ndcg@10 | 0.4349 |
| cosine_mrr@10 | 0.3401 |
| **cosine_map@100** | **0.35** |
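The tables above are produced by `InformationRetrievalEvaluator`. A minimal sketch of such a run is shown below, assuming a small hand-built query/corpus mapping; the ids and texts are illustrative and not taken from the evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tmmazen/bge-base-st-phyto")

# Hypothetical evaluation data: question ids and passage ids mapped to text.
queries = {"q1": "Quelles sont les contre-indications pour la plante Fenouil?"}
corpus = {"d1": "L'allergie reconnue au fenouil ou à l'anéthol en contre-indique l'utilisation."}
relevant_docs = {"q1": {"d1"}}  # which passages are relevant for each query

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
    truncate_dim=768,  # repeat with 512, 256, 128, 64 for the other tables
)
print(evaluator(model))  # returns the accuracy/precision/recall/NDCG/MRR/MAP metrics
```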
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 806 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 806 samples:
| | positive | anchor |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 206.94 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.83 tokens</li><li>max: 41 tokens</li></ul> |
* Samples:
| positive | anchor |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| <code>Les études cliniques évaluant les propriétés thérapeutiques de l’ortie sont nombreuses et portent sur un grand nombre de patients. Les propriétés diurétiques des feuilles ont été étudiées au cours d’au moins cinq essais cliniques portant sur plus de 10 000 patients. Malheureusement, dans la plupart des cas, ces études ne faisaient pas appel à un placebo et leurs résultats, plutôt favorables à cet effet diurétique, sont entachés d’incertitude. De la même manière, la demi-douzaine d’essais cliniques évaluant les effets de la racine d’ortie sur l’adénome de la prostate, et portant sur plus de 16 000 hommes, est de si mauvaise qualité méthodologique qu’il est impossible d’être certain des effets observés. Pour résumer, on peut dire que si les études existantes pointent vers une efficacité de la racine d’ortie, elles n’en apportent pas la preuve scientifique. Une petite étude contre placebo suggère une certaine efficacité des feuilles d’ortie pour soulager les symptômes de la rhinite allergique. Enfin, aucune étude ne justifie son usage pour améliorer l’aspect des cheveux et des ongles, ou pour soulager les douleurs articulaires. ... l’EMA Si l’Agence européenne du médicament considère comme « traditionnellement établi » l’usage des feuilles d’ortie comme « traitement diurétique complémentaire des infections urinaires » et comme « traitement complémentaire des douleurs articulaires et de la séborrhée », elle se refuse à qualifier l’usage de la racine d’ortie. L'EMA considère que les études ne sont pas concluantes et que, l'HBP étant une maladie qui doit être traitée médicalement, la racine d'ortie ne peut pas être considérée comme un traitement traditionnel de cette pathologie. ... l’OMS L’Organisation mondiale de la santé reconnaît comme « cliniquement établi » l’usage de la racine d’ortie « dans le traitement des problèmes d’émission d’urine liés à l’HBP légère à modérée, lorsque l’absence de cancer de la prostate est avérée ». Elle considère comme « traditionnel » l’usage de la racine d’ortie comme « diurétique, et pour soulager les douleurs rhumatismales et la sciatique ». ... la Commission E La Commission E du ministère de la Santé allemand reconnaît l’usage des feuilles d’ortie comme « traitement complémentaire des douleurs articulaires et, en tant que diurétique, comme traitement complémentaire des infections et des calculs urinaires ». Elle admet l’usage de la racine d’ortie « dans le traitement symptomatique des troubles de la prostate mineurs ou modérés, en augmentant le flux urinaire et en diminuant la quantité d’urine restant dans la vessie ». ... l’ESCOP La Coopération scientifique européenne en phytothérapie reconnaît l’usage des feuilles d’ortie comme « traitement symptomatique de l’arthrose et des douleurs articulaires », et comme « diurétique dans les infections urinaires, malgré l’absence de preuves d’efficacité après administration par voie orale ». Elle admet l’usage de la racine d’ortie « dans le traitement symptomatique des problèmes d’émission d’urine liés aux troubles de la prostate, mineurs ou modérés ».</code> | <code>Quelle est l'efficacité de la plante Ortie dioïque?</code> |
| <code>D’après des essais in vitro (dans le tube à essai), l’extrait aqueux de mélisse pourrait inhiber la TSH, une hormone qui stimule la glande thyroïde. Cependant, aucun effet indésirable de type thyroïdien n’a été décrit. Néanmoins, les personnes qui souffrent de maladie de la thyroïde doivent utiliser la mélisse avec prudence.</code> | <code>Quelles sont les contre-indications pour la plante Mélisse?</code> |
| <code> L’absinthe (Artemisia absinthium) pousse dans les zones tempérées d’Europe, d’Amérique et d’Asie. Au moment de la floraison, on en récolte les feuilles et les sommités fleuries pour les faire sécher. Pendant longtemps, la poudre d’absinthe a été mélangée à l’encre des copistes pour protéger les manuscrits contre les attaques des insectes mangeurs de papier. L’absinthe a également servi à la fabrication d’une boisson alcoolisée très populaire au XIXe siècle, dont l’usage abusif provoquait des hallucinations, de la démence et des convulsions. Depuis 2005, les alcools à base d’absinthe sont de nouveau autorisés à la vente à condition de contenir moins de 35 mg de thuyone par litre. En phytothérapie, l’absinthe est utilisée pour ouvrir l’appétit et aider les digestions difficiles. En application locale, elle est proposée pour aider à la cicatrisation des plaies. Les autres usages traditionnels de l’absinthe<br><br><br>L’absinthe a également été utilisée comme sédatif, ainsi que pour faire baisser la fièvre, éliminer les vers intestinaux et lutter contre l’anémie.</code> | <code>Quelle est l'origine et quels sont les usages de la plante Absinthe?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
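These parameters map directly onto the sentence-transformers loss classes. A minimal sketch of how this loss could be assembled is below; the base model name is taken from this card and everything else follows the parameters above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Base model being fine-tuned (same as in this card).
model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Inner loss: in-batch negatives over (anchor, positive) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same objective to embeddings truncated to 768, 512, 256, 128 and 64
# dimensions, each weighted equally.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```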
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 1e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
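As a rough illustration, the non-default hyperparameters above could be wired into a `SentenceTransformerTrainer` as sketched below. The output directory, the one-pair stand-in dataset, and the added `save_strategy` are assumptions, not taken from the original run:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

# Tiny stand-in for the 806-pair json dataset described above (columns: positive, anchor).
train_dataset = Dataset.from_dict({
    "positive": ["En tisane, la dose recommandée est de 3 g de graines écrasées en infusion."],
    "anchor": ["Quelles sont les formes et le dosage recommandés pour la plante Anis vert?"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-st-phyto",   # illustrative path
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,                        # requires a CUDA device
    eval_strategy="epoch",
    save_strategy="epoch",            # assumed so load_best_model_at_end can work
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate anchors within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,       # stand-in; the original run tracked IR evaluators instead
    loss=loss,
)
trainer.train()
```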
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.9412 | 3 | - | 0.2476 | 0.2681 | 0.2813 | 0.2090 | 0.3067 |
| 1.8824 | 6 | - | 0.3945 | 0.4313 | 0.4189 | 0.3002 | 0.4526 |
| 2.8235 | 9 | - | 0.4301 | 0.4536 | 0.4456 | 0.3390 | 0.4674 |
| 3.1373 | 10 | 6.0243 | - | - | - | - | - |
| **3.7647** | **12** | **-** | **0.4341** | **0.4592** | **0.4485** | **0.35** | **0.4692** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.0
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.33.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS",
"ESSAI"
] | BioNLP |
croissantllm/base_30k | croissantllm | text2text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705 | 1,706 | 7 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (30k steps)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 30k steps (0.47T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model: it is not fine-tuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_30k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.
He is heading to the market. -> Il va au marché.
We are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# second few-shot example; special tokens (including the BOS token) are kept here
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| [
"TRANSLATION"
] | [
"CRAFT"
] | Non_BioNLP |
twadada/mv_sw | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,736 | 1,736 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: model2vec_result_fixed
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7910447761194
- type: ap
value: 33.038020188116036
- type: f1
value: 65.03799728338926
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 72.47644999999999
- type: ap
value: 66.91002822830875
- type: f1
value: 72.2600863044581
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.012
- type: f1
value: 35.38209336470206
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 21.124000000000002
- type: map_at_10
value: 34.329
- type: map_at_100
value: 35.612
- type: map_at_1000
value: 35.647
- type: map_at_3
value: 30.263
- type: map_at_5
value: 32.358
- type: mrr_at_1
value: 21.764
- type: mrr_at_10
value: 34.558
- type: mrr_at_100
value: 35.848
- type: mrr_at_1000
value: 35.882999999999996
- type: mrr_at_3
value: 30.441000000000003
- type: mrr_at_5
value: 32.621
- type: ndcg_at_1
value: 21.124000000000002
- type: ndcg_at_10
value: 41.961
- type: ndcg_at_100
value: 47.746
- type: ndcg_at_1000
value: 48.63
- type: ndcg_at_3
value: 33.469
- type: ndcg_at_5
value: 37.261
- type: precision_at_1
value: 21.124000000000002
- type: precision_at_10
value: 6.643000000000001
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.272000000000002
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.124000000000002
- type: recall_at_10
value: 66.43
- type: recall_at_100
value: 92.461
- type: recall_at_1000
value: 99.289
- type: recall_at_3
value: 42.817
- type: recall_at_5
value: 52.063
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 35.422522812555265
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 25.271555965391595
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 54.11180788298141
- type: mrr
value: 68.73587477465594
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.11612347924923
- type: cos_sim_spearman
value: 75.85775256673794
- type: euclidean_pearson
value: 77.46080567383865
- type: euclidean_spearman
value: 75.85775256673794
- type: manhattan_pearson
value: 77.7319143671074
- type: manhattan_spearman
value: 75.98908086034702
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 72.63636363636363
- type: f1
value: 71.69751597573539
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 30.861094091770546
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 20.222365644637257
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 19.939
- type: map_at_10
value: 26.924
- type: map_at_100
value: 28.16
- type: map_at_1000
value: 28.316999999999997
- type: map_at_3
value: 24.45
- type: map_at_5
value: 25.751
- type: mrr_at_1
value: 25.894000000000002
- type: mrr_at_10
value: 32.652
- type: mrr_at_100
value: 33.584
- type: mrr_at_1000
value: 33.664
- type: mrr_at_3
value: 30.520000000000003
- type: mrr_at_5
value: 31.671
- type: ndcg_at_1
value: 25.894000000000002
- type: ndcg_at_10
value: 31.835
- type: ndcg_at_100
value: 37.325
- type: ndcg_at_1000
value: 40.586
- type: ndcg_at_3
value: 28.143
- type: ndcg_at_5
value: 29.648999999999997
- type: precision_at_1
value: 25.894000000000002
- type: precision_at_10
value: 6.194999999999999
- type: precision_at_100
value: 1.126
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 13.543
- type: precision_at_5
value: 9.757
- type: recall_at_1
value: 19.939
- type: recall_at_10
value: 40.537
- type: recall_at_100
value: 64.717
- type: recall_at_1000
value: 87.01299999999999
- type: recall_at_3
value: 29.301
- type: recall_at_5
value: 33.918
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 16.601
- type: map_at_10
value: 22.07
- type: map_at_100
value: 22.958000000000002
- type: map_at_1000
value: 23.074
- type: map_at_3
value: 20.137
- type: map_at_5
value: 21.315
- type: mrr_at_1
value: 20.382
- type: mrr_at_10
value: 25.954
- type: mrr_at_100
value: 26.723000000000003
- type: mrr_at_1000
value: 26.791999999999998
- type: mrr_at_3
value: 24.098
- type: mrr_at_5
value: 25.27
- type: ndcg_at_1
value: 20.382
- type: ndcg_at_10
value: 25.734
- type: ndcg_at_100
value: 29.952
- type: ndcg_at_1000
value: 32.618
- type: ndcg_at_3
value: 22.445999999999998
- type: ndcg_at_5
value: 24.162
- type: precision_at_1
value: 20.382
- type: precision_at_10
value: 4.662
- type: precision_at_100
value: 0.8580000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.446
- type: precision_at_5
value: 7.682
- type: recall_at_1
value: 16.601
- type: recall_at_10
value: 32.882
- type: recall_at_100
value: 51.273
- type: recall_at_1000
value: 69.33200000000001
- type: recall_at_3
value: 23.54
- type: recall_at_5
value: 28.054000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 25.386999999999997
- type: map_at_10
value: 34.183
- type: map_at_100
value: 35.198
- type: map_at_1000
value: 35.292
- type: map_at_3
value: 31.466
- type: map_at_5
value: 33.037
- type: mrr_at_1
value: 29.404000000000003
- type: mrr_at_10
value: 37.519000000000005
- type: mrr_at_100
value: 38.305
- type: mrr_at_1000
value: 38.365
- type: mrr_at_3
value: 35.152
- type: mrr_at_5
value: 36.531000000000006
- type: ndcg_at_1
value: 29.404000000000003
- type: ndcg_at_10
value: 39.235
- type: ndcg_at_100
value: 44.072
- type: ndcg_at_1000
value: 46.272999999999996
- type: ndcg_at_3
value: 34.292
- type: ndcg_at_5
value: 36.735
- type: precision_at_1
value: 29.404000000000003
- type: precision_at_10
value: 6.539000000000001
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 10.984
- type: recall_at_1
value: 25.386999999999997
- type: recall_at_10
value: 51.256
- type: recall_at_100
value: 73.53699999999999
- type: recall_at_1000
value: 89.522
- type: recall_at_3
value: 37.830999999999996
- type: recall_at_5
value: 43.811
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 10.832
- type: map_at_10
value: 16.154
- type: map_at_100
value: 16.863
- type: map_at_1000
value: 16.979
- type: map_at_3
value: 14.654
- type: map_at_5
value: 15.634
- type: mrr_at_1
value: 11.751000000000001
- type: mrr_at_10
value: 17.286
- type: mrr_at_100
value: 18.019
- type: mrr_at_1000
value: 18.122
- type: mrr_at_3
value: 15.706000000000001
- type: mrr_at_5
value: 16.774
- type: ndcg_at_1
value: 11.751000000000001
- type: ndcg_at_10
value: 19.197
- type: ndcg_at_100
value: 23.159
- type: ndcg_at_1000
value: 26.453
- type: ndcg_at_3
value: 16.186
- type: ndcg_at_5
value: 17.936
- type: precision_at_1
value: 11.751000000000001
- type: precision_at_10
value: 3.1189999999999998
- type: precision_at_100
value: 0.54
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 7.194000000000001
- type: precision_at_5
value: 5.311
- type: recall_at_1
value: 10.832
- type: recall_at_10
value: 27.472
- type: recall_at_100
value: 46.471000000000004
- type: recall_at_1000
value: 71.91199999999999
- type: recall_at_3
value: 19.417
- type: recall_at_5
value: 23.577
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 6.019
- type: map_at_10
value: 9.584
- type: map_at_100
value: 10.433
- type: map_at_1000
value: 10.562000000000001
- type: map_at_3
value: 8.351
- type: map_at_5
value: 9.005
- type: mrr_at_1
value: 7.2139999999999995
- type: mrr_at_10
value: 11.62
- type: mrr_at_100
value: 12.469
- type: mrr_at_1000
value: 12.577
- type: mrr_at_3
value: 10.158000000000001
- type: mrr_at_5
value: 10.898
- type: ndcg_at_1
value: 7.2139999999999995
- type: ndcg_at_10
value: 12.145
- type: ndcg_at_100
value: 16.672
- type: ndcg_at_1000
value: 20.342
- type: ndcg_at_3
value: 9.607000000000001
- type: ndcg_at_5
value: 10.712000000000002
- type: precision_at_1
value: 7.2139999999999995
- type: precision_at_10
value: 2.338
- type: precision_at_100
value: 0.5459999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 4.726
- type: precision_at_5
value: 3.5319999999999996
- type: recall_at_1
value: 6.019
- type: recall_at_10
value: 18.102999999999998
- type: recall_at_100
value: 38.482
- type: recall_at_1000
value: 65.436
- type: recall_at_3
value: 11.178
- type: recall_at_5
value: 13.877
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 16.822
- type: map_at_10
value: 22.476
- type: map_at_100
value: 23.69
- type: map_at_1000
value: 23.827
- type: map_at_3
value: 20.441000000000003
- type: map_at_5
value: 21.512
- type: mrr_at_1
value: 20.788999999999998
- type: mrr_at_10
value: 26.674
- type: mrr_at_100
value: 27.675
- type: mrr_at_1000
value: 27.753
- type: mrr_at_3
value: 24.495
- type: mrr_at_5
value: 25.629999999999995
- type: ndcg_at_1
value: 20.788999999999998
- type: ndcg_at_10
value: 26.667999999999996
- type: ndcg_at_100
value: 32.565
- type: ndcg_at_1000
value: 35.634
- type: ndcg_at_3
value: 22.942
- type: ndcg_at_5
value: 24.514
- type: precision_at_1
value: 20.788999999999998
- type: precision_at_10
value: 4.947
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 10.748000000000001
- type: precision_at_5
value: 7.68
- type: recall_at_1
value: 16.822
- type: recall_at_10
value: 35.237
- type: recall_at_100
value: 61.219
- type: recall_at_1000
value: 82.499
- type: recall_at_3
value: 24.524
- type: recall_at_5
value: 28.787000000000003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 12.416
- type: map_at_10
value: 17.684
- type: map_at_100
value: 18.851000000000003
- type: map_at_1000
value: 18.991
- type: map_at_3
value: 15.770999999999999
- type: map_at_5
value: 16.606
- type: mrr_at_1
value: 15.068000000000001
- type: mrr_at_10
value: 21.288
- type: mrr_at_100
value: 22.306
- type: mrr_at_1000
value: 22.396
- type: mrr_at_3
value: 19.273
- type: mrr_at_5
value: 20.398
- type: ndcg_at_1
value: 15.068000000000001
- type: ndcg_at_10
value: 21.66
- type: ndcg_at_100
value: 27.245
- type: ndcg_at_1000
value: 30.591
- type: ndcg_at_3
value: 17.968999999999998
- type: ndcg_at_5
value: 19.352
- type: precision_at_1
value: 15.068000000000001
- type: precision_at_10
value: 4.326
- type: precision_at_100
value: 0.855
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 8.713999999999999
- type: precision_at_5
value: 6.3469999999999995
- type: recall_at_1
value: 12.416
- type: recall_at_10
value: 30.008000000000003
- type: recall_at_100
value: 54.498999999999995
- type: recall_at_1000
value: 78.32000000000001
- type: recall_at_3
value: 19.79
- type: recall_at_5
value: 23.376
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 13.36133333333333
- type: map_at_10
value: 18.6895
- type: map_at_100
value: 19.62275
- type: map_at_1000
value: 19.748833333333334
- type: map_at_3
value: 16.8815
- type: map_at_5
value: 17.84133333333334
- type: mrr_at_1
value: 16.093083333333336
- type: mrr_at_10
value: 21.63225
- type: mrr_at_100
value: 22.477333333333334
- type: mrr_at_1000
value: 22.563166666666664
- type: mrr_at_3
value: 19.83
- type: mrr_at_5
value: 20.799166666666668
- type: ndcg_at_1
value: 16.093083333333336
- type: ndcg_at_10
value: 22.30233333333333
- type: ndcg_at_100
value: 27.000333333333337
- type: ndcg_at_1000
value: 30.14883333333333
- type: ndcg_at_3
value: 18.966499999999996
- type: ndcg_at_5
value: 20.425916666666666
- type: precision_at_1
value: 16.093083333333336
- type: precision_at_10
value: 4.062916666666667
- type: precision_at_100
value: 0.7655833333333333
- type: precision_at_1000
value: 0.12208333333333334
- type: precision_at_3
value: 8.848666666666666
- type: precision_at_5
value: 6.400833333333333
- type: recall_at_1
value: 13.36133333333333
- type: recall_at_10
value: 30.32383333333334
- type: recall_at_100
value: 51.808
- type: recall_at_1000
value: 74.64483333333332
- type: recall_at_3
value: 20.884249999999994
- type: recall_at_5
value: 24.67641666666667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 9.722999999999999
- type: map_at_10
value: 14.280999999999999
- type: map_at_100
value: 15.065000000000001
- type: map_at_1000
value: 15.154
- type: map_at_3
value: 13.004
- type: map_at_5
value: 13.626
- type: mrr_at_1
value: 11.81
- type: mrr_at_10
value: 16.384
- type: mrr_at_100
value: 17.189
- type: mrr_at_1000
value: 17.269000000000002
- type: mrr_at_3
value: 15.082
- type: mrr_at_5
value: 15.711
- type: ndcg_at_1
value: 11.81
- type: ndcg_at_10
value: 17.253
- type: ndcg_at_100
value: 21.404
- type: ndcg_at_1000
value: 24.09
- type: ndcg_at_3
value: 14.716999999999999
- type: ndcg_at_5
value: 15.706000000000001
- type: precision_at_1
value: 11.81
- type: precision_at_10
value: 2.9749999999999996
- type: precision_at_100
value: 0.543
- type: precision_at_1000
value: 0.084
- type: precision_at_3
value: 6.902
- type: precision_at_5
value: 4.816
- type: recall_at_1
value: 9.722999999999999
- type: recall_at_10
value: 24.569
- type: recall_at_100
value: 43.997
- type: recall_at_1000
value: 64.44
- type: recall_at_3
value: 17.134
- type: recall_at_5
value: 19.72
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 7.497
- type: map_at_10
value: 10.846
- type: map_at_100
value: 11.498999999999999
- type: map_at_1000
value: 11.618
- type: map_at_3
value: 9.658999999999999
- type: map_at_5
value: 10.298
- type: mrr_at_1
value: 9.119
- type: mrr_at_10
value: 12.992999999999999
- type: mrr_at_100
value: 13.700999999999999
- type: mrr_at_1000
value: 13.797999999999998
- type: mrr_at_3
value: 11.666
- type: mrr_at_5
value: 12.362
- type: ndcg_at_1
value: 9.119
- type: ndcg_at_10
value: 13.308
- type: ndcg_at_100
value: 16.98
- type: ndcg_at_1000
value: 20.488
- type: ndcg_at_3
value: 10.982
- type: ndcg_at_5
value: 12.003
- type: precision_at_1
value: 9.119
- type: precision_at_10
value: 2.4979999999999998
- type: precision_at_100
value: 0.519
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 5.288
- type: precision_at_5
value: 3.8890000000000002
- type: recall_at_1
value: 7.497
- type: recall_at_10
value: 18.817999999999998
- type: recall_at_100
value: 35.893
- type: recall_at_1000
value: 61.966
- type: recall_at_3
value: 12.199
- type: recall_at_5
value: 14.87
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 11.856
- type: map_at_10
value: 16.685
- type: map_at_100
value: 17.433
- type: map_at_1000
value: 17.558
- type: map_at_3
value: 15.021
- type: map_at_5
value: 15.931999999999999
- type: mrr_at_1
value: 14.179
- type: mrr_at_10
value: 19.398
- type: mrr_at_100
value: 20.153
- type: mrr_at_1000
value: 20.251
- type: mrr_at_3
value: 17.631
- type: mrr_at_5
value: 18.517
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.061999999999998
- type: ndcg_at_100
value: 24.149
- type: ndcg_at_1000
value: 27.644999999999996
- type: ndcg_at_3
value: 16.794
- type: ndcg_at_5
value: 18.224
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.582
- type: precision_at_100
value: 0.623
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 7.774
- type: precision_at_5
value: 5.5969999999999995
- type: recall_at_1
value: 11.856
- type: recall_at_10
value: 27.778999999999996
- type: recall_at_100
value: 46.733000000000004
- type: recall_at_1000
value: 72.481
- type: recall_at_3
value: 18.859
- type: recall_at_5
value: 22.435
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 13.164000000000001
- type: map_at_10
value: 19.317999999999998
- type: map_at_100
value: 20.463
- type: map_at_1000
value: 20.646
- type: map_at_3
value: 17.126
- type: map_at_5
value: 18.056
- type: mrr_at_1
value: 16.601
- type: mrr_at_10
value: 22.62
- type: mrr_at_100
value: 23.601
- type: mrr_at_1000
value: 23.676
- type: mrr_at_3
value: 20.685000000000002
- type: mrr_at_5
value: 21.465999999999998
- type: ndcg_at_1
value: 16.601
- type: ndcg_at_10
value: 23.735999999999997
- type: ndcg_at_100
value: 29.047
- type: ndcg_at_1000
value: 32.323
- type: ndcg_at_3
value: 20.013
- type: ndcg_at_5
value: 21.165
- type: precision_at_1
value: 16.601
- type: precision_at_10
value: 4.7829999999999995
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.197
- type: precision_at_3
value: 9.881
- type: precision_at_5
value: 7.074999999999999
- type: recall_at_1
value: 13.164000000000001
- type: recall_at_10
value: 33.041
- type: recall_at_100
value: 57.907
- type: recall_at_1000
value: 79.887
- type: recall_at_3
value: 21.397
- type: recall_at_5
value: 24.863
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 10.08
- type: map_at_10
value: 14.069
- type: map_at_100
value: 14.860000000000001
- type: map_at_1000
value: 14.968
- type: map_at_3
value: 12.498
- type: map_at_5
value: 13.324
- type: mrr_at_1
value: 10.906
- type: mrr_at_10
value: 15.198999999999998
- type: mrr_at_100
value: 16.003
- type: mrr_at_1000
value: 16.095000000000002
- type: mrr_at_3
value: 13.494
- type: mrr_at_5
value: 14.362
- type: ndcg_at_1
value: 10.906
- type: ndcg_at_10
value: 16.794999999999998
- type: ndcg_at_100
value: 21.434
- type: ndcg_at_1000
value: 24.743000000000002
- type: ndcg_at_3
value: 13.507
- type: ndcg_at_5
value: 14.953
- type: precision_at_1
value: 10.906
- type: precision_at_10
value: 2.791
- type: precision_at_100
value: 0.5559999999999999
- type: precision_at_1000
value: 0.091
- type: precision_at_3
value: 5.545
- type: precision_at_5
value: 4.14
- type: recall_at_1
value: 10.08
- type: recall_at_10
value: 24.184
- type: recall_at_100
value: 46.967999999999996
- type: recall_at_1000
value: 72.92999999999999
- type: recall_at_3
value: 15.440999999999999
- type: recall_at_5
value: 18.829
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 6.537
- type: map_at_10
value: 11.465
- type: map_at_100
value: 12.851
- type: map_at_1000
value: 13.045000000000002
- type: map_at_3
value: 9.369
- type: map_at_5
value: 10.331
- type: mrr_at_1
value: 15.244
- type: mrr_at_10
value: 23.593
- type: mrr_at_100
value: 24.772
- type: mrr_at_1000
value: 24.839
- type: mrr_at_3
value: 20.467
- type: mrr_at_5
value: 22.027
- type: ndcg_at_1
value: 15.244
- type: ndcg_at_10
value: 17.288999999999998
- type: ndcg_at_100
value: 23.757
- type: ndcg_at_1000
value: 27.725
- type: ndcg_at_3
value: 13.245000000000001
- type: ndcg_at_5
value: 14.485000000000001
- type: precision_at_1
value: 15.244
- type: precision_at_10
value: 5.733
- type: precision_at_100
value: 1.264
- type: precision_at_1000
value: 0.199
- type: precision_at_3
value: 10.054
- type: precision_at_5
value: 7.9350000000000005
- type: recall_at_1
value: 6.537
- type: recall_at_10
value: 22.046
- type: recall_at_100
value: 44.818000000000005
- type: recall_at_1000
value: 67.676
- type: recall_at_3
value: 12.232
- type: recall_at_5
value: 15.540999999999999
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 4.304
- type: map_at_10
value: 9.944
- type: map_at_100
value: 14.113000000000001
- type: map_at_1000
value: 15.085
- type: map_at_3
value: 7.228999999999999
- type: map_at_5
value: 8.368
- type: mrr_at_1
value: 43.0
- type: mrr_at_10
value: 53.303999999999995
- type: mrr_at_100
value: 53.979
- type: mrr_at_1000
value: 54.005
- type: mrr_at_3
value: 50.542
- type: mrr_at_5
value: 52.154
- type: ndcg_at_1
value: 31.5
- type: ndcg_at_10
value: 24.235
- type: ndcg_at_100
value: 28.01
- type: ndcg_at_1000
value: 34.724
- type: ndcg_at_3
value: 26.682
- type: ndcg_at_5
value: 25.249
- type: precision_at_1
value: 43.0
- type: precision_at_10
value: 21.65
- type: precision_at_100
value: 6.97
- type: precision_at_1000
value: 1.4449999999999998
- type: precision_at_3
value: 32.25
- type: precision_at_5
value: 27.250000000000004
- type: recall_at_1
value: 4.304
- type: recall_at_10
value: 15.014
- type: recall_at_100
value: 35.115
- type: recall_at_1000
value: 58.52
- type: recall_at_3
value: 8.698
- type: recall_at_5
value: 11.052
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.09
- type: f1
value: 41.3731018097549
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 16.349
- type: map_at_10
value: 24.917
- type: map_at_100
value: 26.003
- type: map_at_1000
value: 26.072
- type: map_at_3
value: 22.067999999999998
- type: map_at_5
value: 23.610999999999997
- type: mrr_at_1
value: 17.416999999999998
- type: mrr_at_10
value: 26.44
- type: mrr_at_100
value: 27.509
- type: mrr_at_1000
value: 27.57
- type: mrr_at_3
value: 23.422
- type: mrr_at_5
value: 25.063999999999997
- type: ndcg_at_1
value: 17.416999999999998
- type: ndcg_at_10
value: 30.267
- type: ndcg_at_100
value: 35.650999999999996
- type: ndcg_at_1000
value: 37.57
- type: ndcg_at_3
value: 24.303
- type: ndcg_at_5
value: 27.099
- type: precision_at_1
value: 17.416999999999998
- type: precision_at_10
value: 4.9590000000000005
- type: precision_at_100
value: 0.7799999999999999
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 10.536
- type: precision_at_5
value: 7.807
- type: recall_at_1
value: 16.349
- type: recall_at_10
value: 45.678999999999995
- type: recall_at_100
value: 70.541
- type: recall_at_1000
value: 85.36500000000001
- type: recall_at_3
value: 29.42
- type: recall_at_5
value: 36.112
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 7.478999999999999
- type: map_at_10
value: 11.933
- type: map_at_100
value: 13.078000000000001
- type: map_at_1000
value: 13.267999999999999
- type: map_at_3
value: 9.975000000000001
- type: map_at_5
value: 10.928
- type: mrr_at_1
value: 14.66
- type: mrr_at_10
value: 20.737
- type: mrr_at_100
value: 21.719
- type: mrr_at_1000
value: 21.809
- type: mrr_at_3
value: 18.57
- type: mrr_at_5
value: 19.558
- type: ndcg_at_1
value: 14.66
- type: ndcg_at_10
value: 16.619
- type: ndcg_at_100
value: 22.467000000000002
- type: ndcg_at_1000
value: 26.745
- type: ndcg_at_3
value: 13.547
- type: ndcg_at_5
value: 14.466999999999999
- type: precision_at_1
value: 14.66
- type: precision_at_10
value: 4.8149999999999995
- type: precision_at_100
value: 1.0619999999999998
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 9.002
- type: precision_at_5
value: 6.79
- type: recall_at_1
value: 7.478999999999999
- type: recall_at_10
value: 21.884
- type: recall_at_100
value: 45.545
- type: recall_at_1000
value: 71.887
- type: recall_at_3
value: 12.485
- type: recall_at_5
value: 15.862000000000002
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 20.628
- type: map_at_10
value: 28.559
- type: map_at_100
value: 29.5
- type: map_at_1000
value: 29.601
- type: map_at_3
value: 26.429000000000002
- type: map_at_5
value: 27.589000000000002
- type: mrr_at_1
value: 41.256
- type: mrr_at_10
value: 48.842999999999996
- type: mrr_at_100
value: 49.523
- type: mrr_at_1000
value: 49.57
- type: mrr_at_3
value: 46.894000000000005
- type: mrr_at_5
value: 48.024
- type: ndcg_at_1
value: 41.256
- type: ndcg_at_10
value: 36.217
- type: ndcg_at_100
value: 40.422000000000004
- type: ndcg_at_1000
value: 42.762
- type: ndcg_at_3
value: 32.275999999999996
- type: ndcg_at_5
value: 34.184
- type: precision_at_1
value: 41.256
- type: precision_at_10
value: 7.838000000000001
- type: precision_at_100
value: 1.119
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 20.207
- type: precision_at_5
value: 13.636999999999999
- type: recall_at_1
value: 20.628
- type: recall_at_10
value: 39.190000000000005
- type: recall_at_100
value: 55.962
- type: recall_at_1000
value: 71.56700000000001
- type: recall_at_3
value: 30.311
- type: recall_at_5
value: 34.092
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 70.78
- type: ap
value: 65.09281598781793
- type: f1
value: 70.56498155979408
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 7.149
- type: map_at_10
value: 12.494
- type: map_at_100
value: 13.438
- type: map_at_1000
value: 13.544
- type: map_at_3
value: 10.58
- type: map_at_5
value: 11.623
- type: mrr_at_1
value: 7.364
- type: mrr_at_10
value: 12.817
- type: mrr_at_100
value: 13.758000000000001
- type: mrr_at_1000
value: 13.861
- type: mrr_at_3
value: 10.879
- type: mrr_at_5
value: 11.942
- type: ndcg_at_1
value: 7.364
- type: ndcg_at_10
value: 15.787999999999998
- type: ndcg_at_100
value: 20.973
- type: ndcg_at_1000
value: 24.156
- type: ndcg_at_3
value: 11.782
- type: ndcg_at_5
value: 13.675
- type: precision_at_1
value: 7.364
- type: precision_at_10
value: 2.702
- type: precision_at_100
value: 0.539
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_3
value: 5.148
- type: precision_at_5
value: 4.043
- type: recall_at_1
value: 7.149
- type: recall_at_10
value: 26.039
- type: recall_at_100
value: 51.405
- type: recall_at_1000
value: 76.97500000000001
- type: recall_at_3
value: 14.979000000000001
- type: recall_at_5
value: 19.553
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.95576835385319
- type: f1
value: 88.06364678376042
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 56.99726402188783
- type: f1
value: 38.19916053247397
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.79287155346336
- type: f1
value: 61.634629394462934
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.30934767989241
- type: f1
value: 68.77914761769519
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 27.617349409076375
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 23.802943866708315
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.431263837648547
- type: mrr
value: 30.205900793315156
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 3.479
- type: map_at_10
value: 7.603
- type: map_at_100
value: 9.725999999999999
- type: map_at_1000
value: 10.84
- type: map_at_3
value: 5.844
- type: map_at_5
value: 6.732
- type: mrr_at_1
value: 33.745999999999995
- type: mrr_at_10
value: 43.516
- type: mrr_at_100
value: 44.190000000000005
- type: mrr_at_1000
value: 44.248
- type: mrr_at_3
value: 41.744
- type: mrr_at_5
value: 42.828
- type: ndcg_at_1
value: 31.424000000000003
- type: ndcg_at_10
value: 24.267
- type: ndcg_at_100
value: 22.416
- type: ndcg_at_1000
value: 31.165
- type: ndcg_at_3
value: 28.349999999999998
- type: ndcg_at_5
value: 26.596999999999998
- type: precision_at_1
value: 33.745999999999995
- type: precision_at_10
value: 18.173000000000002
- type: precision_at_100
value: 6.142
- type: precision_at_1000
value: 1.856
- type: precision_at_3
value: 27.141
- type: precision_at_5
value: 22.91
- type: recall_at_1
value: 3.479
- type: recall_at_10
value: 10.838000000000001
- type: recall_at_100
value: 23.817
- type: recall_at_1000
value: 54.910000000000004
- type: recall_at_3
value: 7.236
- type: recall_at_5
value: 9.003
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 8.413
- type: map_at_10
value: 15.137
- type: map_at_100
value: 16.393
- type: map_at_1000
value: 16.492
- type: map_at_3
value: 12.584999999999999
- type: map_at_5
value: 13.963000000000001
- type: mrr_at_1
value: 9.762
- type: mrr_at_10
value: 16.813
- type: mrr_at_100
value: 17.98
- type: mrr_at_1000
value: 18.064
- type: mrr_at_3
value: 14.257
- type: mrr_at_5
value: 15.651000000000002
- type: ndcg_at_1
value: 9.733
- type: ndcg_at_10
value: 19.543
- type: ndcg_at_100
value: 25.965
- type: ndcg_at_1000
value: 28.663
- type: ndcg_at_3
value: 14.308000000000002
- type: ndcg_at_5
value: 16.771
- type: precision_at_1
value: 9.733
- type: precision_at_10
value: 3.7249999999999996
- type: precision_at_100
value: 0.739
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 6.856
- type: precision_at_5
value: 5.475
- type: recall_at_1
value: 8.413
- type: recall_at_10
value: 31.668000000000003
- type: recall_at_100
value: 61.551
- type: recall_at_1000
value: 82.228
- type: recall_at_3
value: 17.669
- type: recall_at_5
value: 23.488999999999997
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 63.522
- type: map_at_10
value: 76.068
- type: map_at_100
value: 76.858
- type: map_at_1000
value: 76.89099999999999
- type: map_at_3
value: 73.07000000000001
- type: map_at_5
value: 74.883
- type: mrr_at_1
value: 73.11
- type: mrr_at_10
value: 80.134
- type: mrr_at_100
value: 80.403
- type: mrr_at_1000
value: 80.411
- type: mrr_at_3
value: 78.728
- type: mrr_at_5
value: 79.60000000000001
- type: ndcg_at_1
value: 73.1
- type: ndcg_at_10
value: 80.595
- type: ndcg_at_100
value: 82.749
- type: ndcg_at_1000
value: 83.14099999999999
- type: ndcg_at_3
value: 77.021
- type: ndcg_at_5
value: 78.846
- type: precision_at_1
value: 73.1
- type: precision_at_10
value: 12.206999999999999
- type: precision_at_100
value: 1.459
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 33.36
- type: precision_at_5
value: 22.09
- type: recall_at_1
value: 63.522
- type: recall_at_10
value: 89.32600000000001
- type: recall_at_100
value: 97.35000000000001
- type: recall_at_1000
value: 99.613
- type: recall_at_3
value: 79.074
- type: recall_at_5
value: 84.143
- type: map_at_1
value: 3.053
- type: map_at_10
value: 6.912999999999999
- type: map_at_100
value: 8.261000000000001
- type: map_at_1000
value: 8.530999999999999
- type: map_at_3
value: 5.094
- type: map_at_5
value: 5.997
- type: mrr_at_1
value: 15.0
- type: mrr_at_10
value: 22.795
- type: mrr_at_100
value: 24.008
- type: mrr_at_1000
value: 24.099999999999998
- type: mrr_at_3
value: 20.1
- type: mrr_at_5
value: 21.685
- type: ndcg_at_1
value: 15.0
- type: ndcg_at_10
value: 12.386999999999999
- type: ndcg_at_100
value: 18.533
- type: ndcg_at_1000
value: 23.955000000000002
- type: ndcg_at_3
value: 11.75
- type: ndcg_at_5
value: 10.285
- type: precision_at_1
value: 15.0
- type: precision_at_10
value: 6.36
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.28300000000000003
- type: precision_at_3
value: 10.767
- type: precision_at_5
value: 8.9
- type: recall_at_1
value: 3.053
- type: recall_at_10
value: 12.873000000000001
- type: recall_at_100
value: 30.982
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 6.553000000000001
- type: recall_at_5
value: 9.013
- type: map_at_1
value: 0.148
- type: map_at_10
value: 0.971
- type: map_at_100
value: 4.65
- type: map_at_1000
value: 11.509
- type: map_at_3
value: 0.366
- type: map_at_5
value: 0.5599999999999999
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 70.069
- type: mrr_at_100
value: 70.455
- type: mrr_at_1000
value: 70.455
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.19999999999999
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 45.729
- type: ndcg_at_100
value: 32.757
- type: ndcg_at_1000
value: 29.631999999999998
- type: ndcg_at_3
value: 50.407999999999994
- type: ndcg_at_5
value: 48.208
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 47.8
- type: precision_at_100
value: 33.72
- type: precision_at_1000
value: 14.238000000000001
- type: precision_at_3
value: 53.333
- type: precision_at_5
value: 50.8
- type: recall_at_1
value: 0.148
- type: recall_at_10
value: 1.143
- type: recall_at_100
value: 7.219
- type: recall_at_1000
value: 28.294999999999998
- type: recall_at_3
value: 0.392
- type: recall_at_5
value: 0.628
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 39.546512756347916
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 47.07923662495948
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 75.6733681207629
- type: cos_sim_spearman
value: 64.67529822790183
- type: euclidean_pearson
value: 69.13481548437119
- type: euclidean_spearman
value: 64.67521597440148
- type: manhattan_pearson
value: 69.01619022585454
- type: manhattan_spearman
value: 64.8728374071917
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 72.06681953798454
- type: cos_sim_spearman
value: 62.247506425866405
- type: euclidean_pearson
value: 68.05816014766324
- type: euclidean_spearman
value: 62.24902354181767
- type: manhattan_pearson
value: 66.68543187933726
- type: manhattan_spearman
value: 61.438544148098664
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 76.53983672284885
- type: cos_sim_spearman
value: 77.2760080817994
- type: euclidean_pearson
value: 76.7796065728204
- type: euclidean_spearman
value: 77.27600787572996
- type: manhattan_pearson
value: 76.37651419577129
- type: manhattan_spearman
value: 76.85568457177312
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 76.2085441120845
- type: cos_sim_spearman
value: 71.91409062241355
- type: euclidean_pearson
value: 74.52730472762947
- type: euclidean_spearman
value: 71.91409512725335
- type: manhattan_pearson
value: 74.53275469819042
- type: manhattan_spearman
value: 71.9720930787841
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 79.2427339046162
- type: cos_sim_spearman
value: 79.75345017876988
- type: euclidean_pearson
value: 79.31395774152486
- type: euclidean_spearman
value: 79.75345672749796
- type: manhattan_pearson
value: 79.24199253925532
- type: manhattan_spearman
value: 79.64057053536243
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 75.64452384480809
- type: cos_sim_spearman
value: 76.26343905510407
- type: euclidean_pearson
value: 75.64112078051633
- type: euclidean_spearman
value: 76.26343823222666
- type: manhattan_pearson
value: 75.32718790811802
- type: manhattan_spearman
value: 75.9420892784719
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.67406953406964
- type: cos_sim_spearman
value: 85.96709815630739
- type: euclidean_pearson
value: 84.71863724469544
- type: euclidean_spearman
value: 85.96709815630739
- type: manhattan_pearson
value: 85.07894738833434
- type: manhattan_spearman
value: 86.57110045700985
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 59.318066667301615
- type: cos_sim_spearman
value: 63.07956002739231
- type: euclidean_pearson
value: 62.464248268498814
- type: euclidean_spearman
value: 63.07956002739231
- type: manhattan_pearson
value: 62.04813588964373
- type: manhattan_spearman
value: 61.83898606879604
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 77.25982574948274
- type: cos_sim_spearman
value: 75.4051305973876
- type: euclidean_pearson
value: 77.1987828515963
- type: euclidean_spearman
value: 75.40516069202422
- type: manhattan_pearson
value: 77.04099633595793
- type: manhattan_spearman
value: 75.32222510947251
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 72.10127087089839
- type: mrr
value: 90.62288020621355
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 35.5
- type: map_at_10
value: 45.238
- type: map_at_100
value: 46.135999999999996
- type: map_at_1000
value: 46.181
- type: map_at_3
value: 42.329
- type: map_at_5
value: 44.054
- type: mrr_at_1
value: 37.667
- type: mrr_at_10
value: 46.661
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.418
- type: mrr_at_3
value: 43.944
- type: mrr_at_5
value: 45.528
- type: ndcg_at_1
value: 37.667
- type: ndcg_at_10
value: 50.63999999999999
- type: ndcg_at_100
value: 54.885
- type: ndcg_at_1000
value: 56.274
- type: ndcg_at_3
value: 44.891999999999996
- type: ndcg_at_5
value: 47.788000000000004
- type: precision_at_1
value: 37.667
- type: precision_at_10
value: 7.3
- type: precision_at_100
value: 0.97
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 35.5
- type: recall_at_10
value: 66.178
- type: recall_at_100
value: 85.9
- type: recall_at_1000
value: 97.1
- type: recall_at_3
value: 50.306
- type: recall_at_5
value: 57.443999999999996
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71386138613862
- type: cos_sim_ap
value: 90.20131932554314
- type: cos_sim_f1
value: 84.7749114820435
- type: cos_sim_precision
value: 85.7727737973388
- type: cos_sim_recall
value: 83.8
- type: dot_accuracy
value: 99.71386138613862
- type: dot_ap
value: 90.20131927652947
- type: dot_f1
value: 84.7749114820435
- type: dot_precision
value: 85.7727737973388
- type: dot_recall
value: 83.8
- type: euclidean_accuracy
value: 99.71386138613862
- type: euclidean_ap
value: 90.20131927652946
- type: euclidean_f1
value: 84.7749114820435
- type: euclidean_precision
value: 85.7727737973388
- type: euclidean_recall
value: 83.8
- type: manhattan_accuracy
value: 99.7059405940594
- type: manhattan_ap
value: 90.00682250828238
- type: manhattan_f1
value: 84.44211629125196
- type: manhattan_precision
value: 88.66886688668868
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.71386138613862
- type: max_ap
value: 90.20131932554314
- type: max_f1
value: 84.7749114820435
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 48.18939518021159
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.748387331082416
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 43.24644967679195
- type: mrr
value: 43.66944126135303
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.88359913790285
- type: cos_sim_spearman
value: 29.20319307230353
- type: dot_pearson
value: 29.883592420103206
- type: dot_spearman
value: 29.228231500970136
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.22
- type: map_at_10
value: 6.635000000000001
- type: map_at_100
value: 10.873
- type: map_at_1000
value: 12.415
- type: map_at_3
value: 2.8240000000000003
- type: map_at_5
value: 4.111
- type: mrr_at_1
value: 14.285999999999998
- type: mrr_at_10
value: 31.857999999999997
- type: mrr_at_100
value: 33.049
- type: mrr_at_1000
value: 33.049
- type: mrr_at_3
value: 25.85
- type: mrr_at_5
value: 29.218
- type: ndcg_at_1
value: 12.245000000000001
- type: ndcg_at_10
value: 18.618000000000002
- type: ndcg_at_100
value: 28.488000000000003
- type: ndcg_at_1000
value: 41.208
- type: ndcg_at_3
value: 15.045
- type: ndcg_at_5
value: 16.359
- type: precision_at_1
value: 14.285999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 6.5920000000000005
- type: precision_at_1000
value: 1.471
- type: precision_at_3
value: 18.367
- type: precision_at_5
value: 18.776
- type: recall_at_1
value: 1.22
- type: recall_at_10
value: 13.763
- type: recall_at_100
value: 40.107
- type: recall_at_1000
value: 79.06800000000001
- type: recall_at_3
value: 4.2540000000000004
- type: recall_at_5
value: 7.142999999999999
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.82600000000001
- type: ap
value: 14.59656193783295
- type: f1
value: 55.237720537754875
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 55.387662705149964
- type: f1
value: 55.62292803889264
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 33.53590896395144
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 81.57000655659535
- type: cos_sim_ap
value: 57.187256107173354
- type: cos_sim_f1
value: 54.94480738905159
- type: cos_sim_precision
value: 47.93632075471698
- type: cos_sim_recall
value: 64.35356200527704
- type: dot_accuracy
value: 81.57000655659535
- type: dot_ap
value: 57.187234074371496
- type: dot_f1
value: 54.94480738905159
- type: dot_precision
value: 47.93632075471698
- type: dot_recall
value: 64.35356200527704
- type: euclidean_accuracy
value: 81.57000655659535
- type: euclidean_ap
value: 57.18724422350816
- type: euclidean_f1
value: 54.94480738905159
- type: euclidean_precision
value: 47.93632075471698
- type: euclidean_recall
value: 64.35356200527704
- type: manhattan_accuracy
value: 81.71902008702389
- type: manhattan_ap
value: 57.51605309414705
- type: manhattan_f1
value: 55.16339869281046
- type: manhattan_precision
value: 50.18378378378379
- type: manhattan_recall
value: 61.24010554089709
- type: max_accuracy
value: 81.71902008702389
- type: max_ap
value: 57.51605309414705
- type: max_f1
value: 55.16339869281046
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.09977878682035
- type: cos_sim_ap
value: 81.948747937846
- type: cos_sim_f1
value: 74.04089724292375
- type: cos_sim_precision
value: 70.7599466704091
- type: cos_sim_recall
value: 77.64089929165382
- type: dot_accuracy
value: 87.09977878682035
- type: dot_ap
value: 81.94874861792225
- type: dot_f1
value: 74.04089724292375
- type: dot_precision
value: 70.7599466704091
- type: dot_recall
value: 77.64089929165382
- type: euclidean_accuracy
value: 87.09977878682035
- type: euclidean_ap
value: 81.94875280390386
- type: euclidean_f1
value: 74.04089724292375
- type: euclidean_precision
value: 70.7599466704091
- type: euclidean_recall
value: 77.64089929165382
- type: manhattan_accuracy
value: 87.19292117825125
- type: manhattan_ap
value: 82.13752985145429
- type: manhattan_f1
value: 74.36426623424485
- type: manhattan_precision
value: 71.32051463311183
- type: manhattan_recall
value: 77.6793963658762
- type: max_accuracy
value: 87.19292117825125
- type: max_ap
value: 82.13752985145429
- type: max_f1
value: 74.36426623424485
---
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF | arisylafeta | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,722 | 1,722 | 7 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
# arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-1.5B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
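Since the underlying gte-Qwen2-1.5B-instruct model is an embedding model, you will often want embedding vectors rather than text completions. A minimal sketch, assuming your llama.cpp build ships the `llama-embedding` tool and that it accepts the same `--hf-repo`/`--hf-file` download flags as `llama-cli` above (check `llama-embedding --help` for your version):
```bash
# Hedged example: print the embedding vector for a single input text.
# The --hf-repo/--hf-file flags are assumed to work here as they do for llama-cli.
llama-embedding --hf-repo arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF \
  --hf-file gte-qwen2-1.5b-instruct-q4_0.gguf \
  -p "what is the capital of China?"
```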
### Server:
```bash
llama-server --hf-repo arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_0.gguf -c 2048
```
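With the server running, you can also retrieve embeddings over HTTP. A sketch, assuming the server was started with its embeddings endpoint enabled (add `--embedding` to the `llama-server` command above); the exact endpoint path, default port 8080, and payload shape are assumptions and may differ between llama.cpp versions:
```bash
# Hedged example: request an embedding from a local llama-server (default port 8080 assumed).
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "what is the capital of China?"}'
```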
Note: You can also use this checkpoint directly by following the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux, as shown after the build command below).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
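For example, on a Linux machine with an NVIDIA GPU the same step would look like this (the `LLAMA_CUDA=1` flag is the one mentioned above; newer llama.cpp releases may use a CMake-based build instead):
```bash
# Build with CURL support plus CUDA acceleration (Linux + NVIDIA GPU).
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```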
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo arisylafeta/gte-Qwen2-1.5B-instruct-Q4_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q4_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
djovak/multi-qa-MiniLM-L6-cos-v1 | djovak | feature-extraction | [
"transformers",
"bert",
"feature-extraction",
"mteb",
"model-index",
"endpoints_compatible",
"region:us"
] | 1,697 | 1,697 | 6 | 0 | ---
tags:
- mteb
model-index:
- name: multi-qa-MiniLM-L6-cos-v1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 61.791044776119406
- type: ap
value: 25.829130082463124
- type: f1
value: 56.00432262887535
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 62.36077499999999
- type: ap
value: 57.68938427410222
- type: f1
value: 62.247666843818436
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 29.59
- type: f1
value: 29.241975951560622
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.249
- type: map_at_10
value: 40.196
- type: map_at_100
value: 41.336
- type: map_at_1000
value: 41.343
- type: map_at_3
value: 34.934
- type: map_at_5
value: 37.871
- type: mrr_at_1
value: 26.031
- type: mrr_at_10
value: 40.488
- type: mrr_at_100
value: 41.628
- type: mrr_at_1000
value: 41.634
- type: mrr_at_3
value: 35.171
- type: mrr_at_5
value: 38.126
- type: ndcg_at_1
value: 25.249
- type: ndcg_at_10
value: 49.11
- type: ndcg_at_100
value: 53.827999999999996
- type: ndcg_at_1000
value: 53.993
- type: ndcg_at_3
value: 38.175
- type: ndcg_at_5
value: 43.488
- type: precision_at_1
value: 25.249
- type: precision_at_10
value: 7.788
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.861
- type: precision_at_5
value: 12.105
- type: recall_at_1
value: 25.249
- type: recall_at_10
value: 77.881
- type: recall_at_100
value: 98.222
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 47.582
- type: recall_at_5
value: 60.526
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 37.75242616816114
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 27.70031808300247
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.09199068762668
- type: mrr
value: 76.08055225783757
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 80.83007234777145
- type: cos_sim_spearman
value: 79.76446808992547
- type: euclidean_pearson
value: 80.24418669808917
- type: euclidean_spearman
value: 79.76446808992547
- type: manhattan_pearson
value: 79.58896133042379
- type: manhattan_spearman
value: 78.9614377441415
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.6038961038961
- type: f1
value: 77.95572823168757
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 30.240388191413935
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 22.670413424756212
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.694
- type: map_at_10
value: 43.811
- type: map_at_100
value: 45.274
- type: map_at_1000
value: 45.393
- type: map_at_3
value: 40.043
- type: map_at_5
value: 41.983
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 49.748
- type: mrr_at_100
value: 50.356
- type: mrr_at_1000
value: 50.39900000000001
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.598
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 50.39
- type: ndcg_at_100
value: 55.489
- type: ndcg_at_1000
value: 57.291000000000004
- type: ndcg_at_3
value: 44.849
- type: ndcg_at_5
value: 47.195
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 9.714
- type: precision_at_100
value: 1.591
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 21.507
- type: precision_at_5
value: 15.393
- type: recall_at_1
value: 32.694
- type: recall_at_10
value: 63.031000000000006
- type: recall_at_100
value: 84.49
- type: recall_at_1000
value: 96.148
- type: recall_at_3
value: 46.851
- type: recall_at_5
value: 53.64
- type: map_at_1
value: 28.183000000000003
- type: map_at_10
value: 38.796
- type: map_at_100
value: 40.117000000000004
- type: map_at_1000
value: 40.251
- type: map_at_3
value: 35.713
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 35.605
- type: mrr_at_10
value: 44.824000000000005
- type: mrr_at_100
value: 45.544000000000004
- type: mrr_at_1000
value: 45.59
- type: mrr_at_3
value: 42.452
- type: mrr_at_5
value: 43.891999999999996
- type: ndcg_at_1
value: 35.605
- type: ndcg_at_10
value: 44.857
- type: ndcg_at_100
value: 49.68
- type: ndcg_at_1000
value: 51.841
- type: ndcg_at_3
value: 40.445
- type: ndcg_at_5
value: 42.535000000000004
- type: precision_at_1
value: 35.605
- type: precision_at_10
value: 8.624
- type: precision_at_100
value: 1.438
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 19.808999999999997
- type: precision_at_5
value: 14.191
- type: recall_at_1
value: 28.183000000000003
- type: recall_at_10
value: 55.742000000000004
- type: recall_at_100
value: 76.416
- type: recall_at_1000
value: 90.20899999999999
- type: recall_at_3
value: 42.488
- type: recall_at_5
value: 48.431999999999995
- type: map_at_1
value: 36.156
- type: map_at_10
value: 47.677
- type: map_at_100
value: 48.699999999999996
- type: map_at_1000
value: 48.756
- type: map_at_3
value: 44.467
- type: map_at_5
value: 46.132
- type: mrr_at_1
value: 41.567
- type: mrr_at_10
value: 51.06699999999999
- type: mrr_at_100
value: 51.800000000000004
- type: mrr_at_1000
value: 51.827999999999996
- type: mrr_at_3
value: 48.620999999999995
- type: mrr_at_5
value: 50.013
- type: ndcg_at_1
value: 41.567
- type: ndcg_at_10
value: 53.418
- type: ndcg_at_100
value: 57.743
- type: ndcg_at_1000
value: 58.940000000000005
- type: ndcg_at_3
value: 47.923
- type: ndcg_at_5
value: 50.352
- type: precision_at_1
value: 41.567
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 21.337999999999997
- type: precision_at_5
value: 14.646
- type: recall_at_1
value: 36.156
- type: recall_at_10
value: 67.084
- type: recall_at_100
value: 86.299
- type: recall_at_1000
value: 94.82000000000001
- type: recall_at_3
value: 52.209
- type: recall_at_5
value: 58.175
- type: map_at_1
value: 23.513
- type: map_at_10
value: 32.699
- type: map_at_100
value: 33.788000000000004
- type: map_at_1000
value: 33.878
- type: map_at_3
value: 30.044999999999998
- type: map_at_5
value: 31.506
- type: mrr_at_1
value: 25.311
- type: mrr_at_10
value: 34.457
- type: mrr_at_100
value: 35.443999999999996
- type: mrr_at_1000
value: 35.504999999999995
- type: mrr_at_3
value: 31.902
- type: mrr_at_5
value: 33.36
- type: ndcg_at_1
value: 25.311
- type: ndcg_at_10
value: 37.929
- type: ndcg_at_100
value: 43.1
- type: ndcg_at_1000
value: 45.275999999999996
- type: ndcg_at_3
value: 32.745999999999995
- type: ndcg_at_5
value: 35.235
- type: precision_at_1
value: 25.311
- type: precision_at_10
value: 6.034
- type: precision_at_100
value: 0.8959999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.237
- type: precision_at_5
value: 10.034
- type: recall_at_1
value: 23.513
- type: recall_at_10
value: 52.312999999999995
- type: recall_at_100
value: 75.762
- type: recall_at_1000
value: 91.85799999999999
- type: recall_at_3
value: 38.222
- type: recall_at_5
value: 44.316
- type: map_at_1
value: 16.333000000000002
- type: map_at_10
value: 24.605
- type: map_at_100
value: 25.924000000000003
- type: map_at_1000
value: 26.039
- type: map_at_3
value: 21.907
- type: map_at_5
value: 23.294999999999998
- type: mrr_at_1
value: 20.647
- type: mrr_at_10
value: 29.442
- type: mrr_at_100
value: 30.54
- type: mrr_at_1000
value: 30.601
- type: mrr_at_3
value: 26.802999999999997
- type: mrr_at_5
value: 28.147
- type: ndcg_at_1
value: 20.647
- type: ndcg_at_10
value: 30.171999999999997
- type: ndcg_at_100
value: 36.466
- type: ndcg_at_1000
value: 39.095
- type: ndcg_at_3
value: 25.134
- type: ndcg_at_5
value: 27.211999999999996
- type: precision_at_1
value: 20.647
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 1.012
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.148
- type: precision_at_5
value: 8.881
- type: recall_at_1
value: 16.333000000000002
- type: recall_at_10
value: 42.785000000000004
- type: recall_at_100
value: 70.282
- type: recall_at_1000
value: 88.539
- type: recall_at_3
value: 28.307
- type: recall_at_5
value: 33.751
- type: map_at_1
value: 26.821
- type: map_at_10
value: 37.188
- type: map_at_100
value: 38.516
- type: map_at_1000
value: 38.635000000000005
- type: map_at_3
value: 33.821
- type: map_at_5
value: 35.646
- type: mrr_at_1
value: 33.109
- type: mrr_at_10
value: 43.003
- type: mrr_at_100
value: 43.849
- type: mrr_at_1000
value: 43.889
- type: mrr_at_3
value: 40.263
- type: mrr_at_5
value: 41.957
- type: ndcg_at_1
value: 33.109
- type: ndcg_at_10
value: 43.556
- type: ndcg_at_100
value: 49.197
- type: ndcg_at_1000
value: 51.269
- type: ndcg_at_3
value: 38.01
- type: ndcg_at_5
value: 40.647
- type: precision_at_1
value: 33.109
- type: precision_at_10
value: 8.085
- type: precision_at_100
value: 1.286
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 18.191
- type: precision_at_5
value: 13.050999999999998
- type: recall_at_1
value: 26.821
- type: recall_at_10
value: 56.818000000000005
- type: recall_at_100
value: 80.63
- type: recall_at_1000
value: 94.042
- type: recall_at_3
value: 41.266000000000005
- type: recall_at_5
value: 48.087999999999994
- type: map_at_1
value: 22.169
- type: map_at_10
value: 31.682
- type: map_at_100
value: 32.988
- type: map_at_1000
value: 33.097
- type: map_at_3
value: 28.708
- type: map_at_5
value: 30.319000000000003
- type: mrr_at_1
value: 27.854
- type: mrr_at_10
value: 36.814
- type: mrr_at_100
value: 37.741
- type: mrr_at_1000
value: 37.798
- type: mrr_at_3
value: 34.418
- type: mrr_at_5
value: 35.742000000000004
- type: ndcg_at_1
value: 27.854
- type: ndcg_at_10
value: 37.388
- type: ndcg_at_100
value: 43.342999999999996
- type: ndcg_at_1000
value: 45.829
- type: ndcg_at_3
value: 32.512
- type: ndcg_at_5
value: 34.613
- type: precision_at_1
value: 27.854
- type: precision_at_10
value: 7.031999999999999
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 15.753
- type: precision_at_5
value: 11.301
- type: recall_at_1
value: 22.169
- type: recall_at_10
value: 49.44
- type: recall_at_100
value: 75.644
- type: recall_at_1000
value: 92.919
- type: recall_at_3
value: 35.528999999999996
- type: recall_at_5
value: 41.271
- type: map_at_1
value: 24.20158333333334
- type: map_at_10
value: 33.509
- type: map_at_100
value: 34.76525
- type: map_at_1000
value: 34.885999999999996
- type: map_at_3
value: 30.594333333333335
- type: map_at_5
value: 32.160666666666664
- type: mrr_at_1
value: 28.803833333333333
- type: mrr_at_10
value: 37.61358333333333
- type: mrr_at_100
value: 38.5105
- type: mrr_at_1000
value: 38.56841666666667
- type: mrr_at_3
value: 35.090666666666664
- type: mrr_at_5
value: 36.49575
- type: ndcg_at_1
value: 28.803833333333333
- type: ndcg_at_10
value: 39.038333333333334
- type: ndcg_at_100
value: 44.49175
- type: ndcg_at_1000
value: 46.835499999999996
- type: ndcg_at_3
value: 34.011916666666664
- type: ndcg_at_5
value: 36.267
- type: precision_at_1
value: 28.803833333333333
- type: precision_at_10
value: 6.974583333333334
- type: precision_at_100
value: 1.1565
- type: precision_at_1000
value: 0.15533333333333332
- type: precision_at_3
value: 15.78025
- type: precision_at_5
value: 11.279583333333333
- type: recall_at_1
value: 24.20158333333334
- type: recall_at_10
value: 51.408
- type: recall_at_100
value: 75.36958333333334
- type: recall_at_1000
value: 91.5765
- type: recall_at_3
value: 37.334500000000006
- type: recall_at_5
value: 43.14666666666667
- type: map_at_1
value: 21.394
- type: map_at_10
value: 28.807
- type: map_at_100
value: 29.851
- type: map_at_1000
value: 29.959999999999997
- type: map_at_3
value: 26.694000000000003
- type: map_at_5
value: 27.805999999999997
- type: mrr_at_1
value: 23.773
- type: mrr_at_10
value: 30.895
- type: mrr_at_100
value: 31.894
- type: mrr_at_1000
value: 31.971
- type: mrr_at_3
value: 28.988000000000003
- type: mrr_at_5
value: 29.908
- type: ndcg_at_1
value: 23.773
- type: ndcg_at_10
value: 32.976
- type: ndcg_at_100
value: 38.109
- type: ndcg_at_1000
value: 40.797
- type: ndcg_at_3
value: 28.993999999999996
- type: ndcg_at_5
value: 30.659999999999997
- type: precision_at_1
value: 23.773
- type: precision_at_10
value: 5.2299999999999995
- type: precision_at_100
value: 0.857
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.73
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 21.394
- type: recall_at_10
value: 43.75
- type: recall_at_100
value: 66.765
- type: recall_at_1000
value: 86.483
- type: recall_at_3
value: 32.542
- type: recall_at_5
value: 36.689
- type: map_at_1
value: 16.266
- type: map_at_10
value: 23.639
- type: map_at_100
value: 24.814
- type: map_at_1000
value: 24.948
- type: map_at_3
value: 21.401999999999997
- type: map_at_5
value: 22.581
- type: mrr_at_1
value: 19.718
- type: mrr_at_10
value: 27.276
- type: mrr_at_100
value: 28.252
- type: mrr_at_1000
value: 28.33
- type: mrr_at_3
value: 25.086000000000002
- type: mrr_at_5
value: 26.304
- type: ndcg_at_1
value: 19.718
- type: ndcg_at_10
value: 28.254
- type: ndcg_at_100
value: 34.022999999999996
- type: ndcg_at_1000
value: 37.031
- type: ndcg_at_3
value: 24.206
- type: ndcg_at_5
value: 26.009
- type: precision_at_1
value: 19.718
- type: precision_at_10
value: 5.189
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 11.551
- type: precision_at_5
value: 8.362
- type: recall_at_1
value: 16.266
- type: recall_at_10
value: 38.550000000000004
- type: recall_at_100
value: 64.63499999999999
- type: recall_at_1000
value: 86.059
- type: recall_at_3
value: 27.156000000000002
- type: recall_at_5
value: 31.829
- type: map_at_1
value: 26.124000000000002
- type: map_at_10
value: 35.099000000000004
- type: map_at_100
value: 36.269
- type: map_at_1000
value: 36.388999999999996
- type: map_at_3
value: 32.017
- type: map_at_5
value: 33.614
- type: mrr_at_1
value: 31.25
- type: mrr_at_10
value: 39.269999999999996
- type: mrr_at_100
value: 40.134
- type: mrr_at_1000
value: 40.197
- type: mrr_at_3
value: 36.536
- type: mrr_at_5
value: 37.842
- type: ndcg_at_1
value: 31.25
- type: ndcg_at_10
value: 40.643
- type: ndcg_at_100
value: 45.967999999999996
- type: ndcg_at_1000
value: 48.455999999999996
- type: ndcg_at_3
value: 34.954
- type: ndcg_at_5
value: 37.273
- type: precision_at_1
value: 31.25
- type: precision_at_10
value: 6.894
- type: precision_at_100
value: 1.086
- type: precision_at_1000
value: 0.14200000000000002
- type: precision_at_3
value: 15.672
- type: precision_at_5
value: 11.082
- type: recall_at_1
value: 26.124000000000002
- type: recall_at_10
value: 53.730999999999995
- type: recall_at_100
value: 76.779
- type: recall_at_1000
value: 93.908
- type: recall_at_3
value: 37.869
- type: recall_at_5
value: 43.822
- type: map_at_1
value: 21.776
- type: map_at_10
value: 31.384
- type: map_at_100
value: 33.108
- type: map_at_1000
value: 33.339
- type: map_at_3
value: 28.269
- type: map_at_5
value: 30.108
- type: mrr_at_1
value: 26.482
- type: mrr_at_10
value: 35.876000000000005
- type: mrr_at_100
value: 36.887
- type: mrr_at_1000
value: 36.949
- type: mrr_at_3
value: 32.971000000000004
- type: mrr_at_5
value: 34.601
- type: ndcg_at_1
value: 26.482
- type: ndcg_at_10
value: 37.403999999999996
- type: ndcg_at_100
value: 43.722
- type: ndcg_at_1000
value: 46.417
- type: ndcg_at_3
value: 32.149
- type: ndcg_at_5
value: 34.818
- type: precision_at_1
value: 26.482
- type: precision_at_10
value: 7.411
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 15.152
- type: precision_at_5
value: 11.501999999999999
- type: recall_at_1
value: 21.776
- type: recall_at_10
value: 49.333
- type: recall_at_100
value: 76.753
- type: recall_at_1000
value: 93.762
- type: recall_at_3
value: 35.329
- type: recall_at_5
value: 41.82
- type: map_at_1
value: 18.990000000000002
- type: map_at_10
value: 26.721
- type: map_at_100
value: 27.833999999999996
- type: map_at_1000
value: 27.947
- type: map_at_3
value: 24.046
- type: map_at_5
value: 25.491999999999997
- type: mrr_at_1
value: 20.702
- type: mrr_at_10
value: 28.691
- type: mrr_at_100
value: 29.685
- type: mrr_at_1000
value: 29.764000000000003
- type: mrr_at_3
value: 26.124000000000002
- type: mrr_at_5
value: 27.584999999999997
- type: ndcg_at_1
value: 20.702
- type: ndcg_at_10
value: 31.473000000000003
- type: ndcg_at_100
value: 37.061
- type: ndcg_at_1000
value: 39.784000000000006
- type: ndcg_at_3
value: 26.221
- type: ndcg_at_5
value: 28.655
- type: precision_at_1
value: 20.702
- type: precision_at_10
value: 5.083
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 11.275
- type: precision_at_5
value: 8.17
- type: recall_at_1
value: 18.990000000000002
- type: recall_at_10
value: 44.318999999999996
- type: recall_at_100
value: 69.98
- type: recall_at_1000
value: 90.171
- type: recall_at_3
value: 30.246000000000002
- type: recall_at_5
value: 35.927
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.584
- type: map_at_10
value: 16.148
- type: map_at_100
value: 17.727
- type: map_at_1000
value: 17.913999999999998
- type: map_at_3
value: 13.456000000000001
- type: map_at_5
value: 14.841999999999999
- type: mrr_at_1
value: 21.564
- type: mrr_at_10
value: 31.579
- type: mrr_at_100
value: 32.586999999999996
- type: mrr_at_1000
value: 32.638
- type: mrr_at_3
value: 28.294999999999998
- type: mrr_at_5
value: 30.064
- type: ndcg_at_1
value: 21.564
- type: ndcg_at_10
value: 23.294999999999998
- type: ndcg_at_100
value: 29.997
- type: ndcg_at_1000
value: 33.517
- type: ndcg_at_3
value: 18.759
- type: ndcg_at_5
value: 20.324
- type: precision_at_1
value: 21.564
- type: precision_at_10
value: 7.362
- type: precision_at_100
value: 1.451
- type: precision_at_1000
value: 0.21
- type: precision_at_3
value: 13.919999999999998
- type: precision_at_5
value: 10.879
- type: recall_at_1
value: 9.584
- type: recall_at_10
value: 28.508
- type: recall_at_100
value: 51.873999999999995
- type: recall_at_1000
value: 71.773
- type: recall_at_3
value: 17.329
- type: recall_at_5
value: 21.823
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.034
- type: map_at_10
value: 14.664
- type: map_at_100
value: 19.652
- type: map_at_1000
value: 20.701
- type: map_at_3
value: 10.626
- type: map_at_5
value: 12.334
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.132
- type: mrr_at_100
value: 63.639
- type: mrr_at_1000
value: 63.663000000000004
- type: mrr_at_3
value: 61.083
- type: mrr_at_5
value: 62.483
- type: ndcg_at_1
value: 42.875
- type: ndcg_at_10
value: 32.04
- type: ndcg_at_100
value: 35.157
- type: ndcg_at_1000
value: 41.4
- type: ndcg_at_3
value: 35.652
- type: ndcg_at_5
value: 33.617000000000004
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 25.55
- type: precision_at_100
value: 7.5600000000000005
- type: precision_at_1000
value: 1.577
- type: precision_at_3
value: 38.833
- type: precision_at_5
value: 33.15
- type: recall_at_1
value: 7.034
- type: recall_at_10
value: 19.627
- type: recall_at_100
value: 40.528
- type: recall_at_1000
value: 60.789
- type: recall_at_3
value: 11.833
- type: recall_at_5
value: 14.804
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 39.6
- type: f1
value: 35.3770765501984
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.098
- type: map_at_10
value: 46.437
- type: map_at_100
value: 47.156
- type: map_at_1000
value: 47.193000000000005
- type: map_at_3
value: 43.702000000000005
- type: map_at_5
value: 45.326
- type: mrr_at_1
value: 37.774
- type: mrr_at_10
value: 49.512
- type: mrr_at_100
value: 50.196
- type: mrr_at_1000
value: 50.224000000000004
- type: mrr_at_3
value: 46.747
- type: mrr_at_5
value: 48.415
- type: ndcg_at_1
value: 37.774
- type: ndcg_at_10
value: 52.629000000000005
- type: ndcg_at_100
value: 55.995
- type: ndcg_at_1000
value: 56.962999999999994
- type: ndcg_at_3
value: 47.188
- type: ndcg_at_5
value: 50.019000000000005
- type: precision_at_1
value: 37.774
- type: precision_at_10
value: 7.541
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 19.572
- type: precision_at_5
value: 13.288
- type: recall_at_1
value: 35.098
- type: recall_at_10
value: 68.818
- type: recall_at_100
value: 84.004
- type: recall_at_1000
value: 91.36800000000001
- type: recall_at_3
value: 54.176
- type: recall_at_5
value: 60.968999999999994
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.982
- type: map_at_10
value: 28.994999999999997
- type: map_at_100
value: 30.868000000000002
- type: map_at_1000
value: 31.045
- type: map_at_3
value: 25.081999999999997
- type: map_at_5
value: 27.303
- type: mrr_at_1
value: 35.031
- type: mrr_at_10
value: 43.537
- type: mrr_at_100
value: 44.422
- type: mrr_at_1000
value: 44.471
- type: mrr_at_3
value: 41.024
- type: mrr_at_5
value: 42.42
- type: ndcg_at_1
value: 35.031
- type: ndcg_at_10
value: 36.346000000000004
- type: ndcg_at_100
value: 43.275000000000006
- type: ndcg_at_1000
value: 46.577
- type: ndcg_at_3
value: 32.42
- type: ndcg_at_5
value: 33.841
- type: precision_at_1
value: 35.031
- type: precision_at_10
value: 10.231
- type: precision_at_100
value: 1.728
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 21.553
- type: precision_at_5
value: 16.204
- type: recall_at_1
value: 17.982
- type: recall_at_10
value: 43.169000000000004
- type: recall_at_100
value: 68.812
- type: recall_at_1000
value: 89.008
- type: recall_at_3
value: 29.309
- type: recall_at_5
value: 35.514
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.387
- type: map_at_10
value: 36.931000000000004
- type: map_at_100
value: 37.734
- type: map_at_1000
value: 37.818000000000005
- type: map_at_3
value: 34.691
- type: map_at_5
value: 36.016999999999996
- type: mrr_at_1
value: 54.774
- type: mrr_at_10
value: 62.133
- type: mrr_at_100
value: 62.587
- type: mrr_at_1000
value: 62.61600000000001
- type: mrr_at_3
value: 60.49099999999999
- type: mrr_at_5
value: 61.480999999999995
- type: ndcg_at_1
value: 54.774
- type: ndcg_at_10
value: 45.657
- type: ndcg_at_100
value: 48.954
- type: ndcg_at_1000
value: 50.78
- type: ndcg_at_3
value: 41.808
- type: ndcg_at_5
value: 43.816
- type: precision_at_1
value: 54.774
- type: precision_at_10
value: 9.479
- type: precision_at_100
value: 1.208
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 25.856
- type: precision_at_5
value: 17.102
- type: recall_at_1
value: 27.387
- type: recall_at_10
value: 47.394
- type: recall_at_100
value: 60.397999999999996
- type: recall_at_1000
value: 72.54599999999999
- type: recall_at_3
value: 38.785
- type: recall_at_5
value: 42.754999999999995
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 61.217999999999996
- type: ap
value: 56.84286974948407
- type: f1
value: 60.99211195455131
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 19.224
- type: map_at_10
value: 30.448999999999998
- type: map_at_100
value: 31.663999999999998
- type: map_at_1000
value: 31.721
- type: map_at_3
value: 26.922
- type: map_at_5
value: 28.906
- type: mrr_at_1
value: 19.756
- type: mrr_at_10
value: 30.994
- type: mrr_at_100
value: 32.161
- type: mrr_at_1000
value: 32.213
- type: mrr_at_3
value: 27.502
- type: mrr_at_5
value: 29.48
- type: ndcg_at_1
value: 19.742
- type: ndcg_at_10
value: 36.833
- type: ndcg_at_100
value: 42.785000000000004
- type: ndcg_at_1000
value: 44.291000000000004
- type: ndcg_at_3
value: 29.580000000000002
- type: ndcg_at_5
value: 33.139
- type: precision_at_1
value: 19.742
- type: precision_at_10
value: 5.894
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.10200000000000001
- type: precision_at_3
value: 12.665000000000001
- type: precision_at_5
value: 9.393
- type: recall_at_1
value: 19.224
- type: recall_at_10
value: 56.538999999999994
- type: recall_at_100
value: 84.237
- type: recall_at_1000
value: 95.965
- type: recall_at_3
value: 36.71
- type: recall_at_5
value: 45.283
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.97264021887824
- type: f1
value: 89.53607318488027
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 59.566803465572285
- type: f1
value: 40.94003955225124
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.7787491593813
- type: f1
value: 64.51190971513093
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.7794216543376
- type: f1
value: 72.71852261076475
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.40883054472429
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 26.144338339113617
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.894071459751267
- type: mrr
value: 31.965886150526256
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.024
- type: map_at_10
value: 10.533
- type: map_at_100
value: 12.97
- type: map_at_1000
value: 14.163
- type: map_at_3
value: 7.971
- type: map_at_5
value: 9.15
- type: mrr_at_1
value: 40.867
- type: mrr_at_10
value: 48.837
- type: mrr_at_100
value: 49.464999999999996
- type: mrr_at_1000
value: 49.509
- type: mrr_at_3
value: 46.800999999999995
- type: mrr_at_5
value: 47.745
- type: ndcg_at_1
value: 38.854
- type: ndcg_at_10
value: 29.674
- type: ndcg_at_100
value: 26.66
- type: ndcg_at_1000
value: 35.088
- type: ndcg_at_3
value: 34.838
- type: ndcg_at_5
value: 32.423
- type: precision_at_1
value: 40.248
- type: precision_at_10
value: 21.826999999999998
- type: precision_at_100
value: 6.78
- type: precision_at_1000
value: 1.889
- type: precision_at_3
value: 32.405
- type: precision_at_5
value: 27.74
- type: recall_at_1
value: 5.024
- type: recall_at_10
value: 13.996
- type: recall_at_100
value: 26.636
- type: recall_at_1000
value: 57.816
- type: recall_at_3
value: 9.063
- type: recall_at_5
value: 10.883
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.088
- type: map_at_10
value: 36.915
- type: map_at_100
value: 38.141999999999996
- type: map_at_1000
value: 38.191
- type: map_at_3
value: 32.458999999999996
- type: map_at_5
value: 35.004999999999995
- type: mrr_at_1
value: 26.101000000000003
- type: mrr_at_10
value: 39.1
- type: mrr_at_100
value: 40.071
- type: mrr_at_1000
value: 40.106
- type: mrr_at_3
value: 35.236000000000004
- type: mrr_at_5
value: 37.43
- type: ndcg_at_1
value: 26.072
- type: ndcg_at_10
value: 44.482
- type: ndcg_at_100
value: 49.771
- type: ndcg_at_1000
value: 50.903
- type: ndcg_at_3
value: 35.922
- type: ndcg_at_5
value: 40.178000000000004
- type: precision_at_1
value: 26.072
- type: precision_at_10
value: 7.795000000000001
- type: precision_at_100
value: 1.072
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 16.725
- type: precision_at_5
value: 12.468
- type: recall_at_1
value: 23.088
- type: recall_at_10
value: 65.534
- type: recall_at_100
value: 88.68
- type: recall_at_1000
value: 97.101
- type: recall_at_3
value: 43.161
- type: recall_at_5
value: 52.959999999999994
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.612
- type: map_at_10
value: 83.292
- type: map_at_100
value: 83.96000000000001
- type: map_at_1000
value: 83.978
- type: map_at_3
value: 80.26299999999999
- type: map_at_5
value: 82.11500000000001
- type: mrr_at_1
value: 80.21000000000001
- type: mrr_at_10
value: 86.457
- type: mrr_at_100
value: 86.58500000000001
- type: mrr_at_1000
value: 86.587
- type: mrr_at_3
value: 85.452
- type: mrr_at_5
value: 86.101
- type: ndcg_at_1
value: 80.21000000000001
- type: ndcg_at_10
value: 87.208
- type: ndcg_at_100
value: 88.549
- type: ndcg_at_1000
value: 88.683
- type: ndcg_at_3
value: 84.20400000000001
- type: ndcg_at_5
value: 85.768
- type: precision_at_1
value: 80.21000000000001
- type: precision_at_10
value: 13.29
- type: precision_at_100
value: 1.5230000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.767
- type: precision_at_5
value: 24.2
- type: recall_at_1
value: 69.612
- type: recall_at_10
value: 94.651
- type: recall_at_100
value: 99.297
- type: recall_at_1000
value: 99.95100000000001
- type: recall_at_3
value: 86.003
- type: recall_at_5
value: 90.45100000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.28945925252077
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 50.954446620859684
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.888
- type: map_at_10
value: 9.21
- type: map_at_100
value: 10.629
- type: map_at_1000
value: 10.859
- type: map_at_3
value: 6.743
- type: map_at_5
value: 7.982
- type: mrr_at_1
value: 19.1
- type: mrr_at_10
value: 28.294000000000004
- type: mrr_at_100
value: 29.326999999999998
- type: mrr_at_1000
value: 29.414
- type: mrr_at_3
value: 25.367
- type: mrr_at_5
value: 27.002
- type: ndcg_at_1
value: 19.1
- type: ndcg_at_10
value: 15.78
- type: ndcg_at_100
value: 21.807000000000002
- type: ndcg_at_1000
value: 26.593
- type: ndcg_at_3
value: 15.204999999999998
- type: ndcg_at_5
value: 13.217
- type: precision_at_1
value: 19.1
- type: precision_at_10
value: 7.9799999999999995
- type: precision_at_100
value: 1.667
- type: precision_at_1000
value: 0.28300000000000003
- type: precision_at_3
value: 13.933000000000002
- type: precision_at_5
value: 11.379999999999999
- type: recall_at_1
value: 3.888
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 33.848
- type: recall_at_1000
value: 57.345
- type: recall_at_3
value: 8.468
- type: recall_at_5
value: 11.540000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 79.05803116288386
- type: cos_sim_spearman
value: 70.0403855402571
- type: euclidean_pearson
value: 75.59006280166072
- type: euclidean_spearman
value: 70.04038926247613
- type: manhattan_pearson
value: 75.48136278078455
- type: manhattan_spearman
value: 69.9608897701754
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 68.56836430603597
- type: cos_sim_spearman
value: 64.38407759822387
- type: euclidean_pearson
value: 65.93619045541732
- type: euclidean_spearman
value: 64.38184049884836
- type: manhattan_pearson
value: 65.97148637646873
- type: manhattan_spearman
value: 64.48011982438929
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.990362280318
- type: cos_sim_spearman
value: 76.40621890996734
- type: euclidean_pearson
value: 76.01739766577184
- type: euclidean_spearman
value: 76.4062736496846
- type: manhattan_pearson
value: 76.04738378838042
- type: manhattan_spearman
value: 76.44991409719592
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 74.8516957692617
- type: cos_sim_spearman
value: 69.325199098278
- type: euclidean_pearson
value: 73.37922793254768
- type: euclidean_spearman
value: 69.32520119670215
- type: manhattan_pearson
value: 73.3795212376615
- type: manhattan_spearman
value: 69.35306787926315
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 78.644002190612
- type: cos_sim_spearman
value: 80.18337978181648
- type: euclidean_pearson
value: 79.7628642371887
- type: euclidean_spearman
value: 80.18337906907526
- type: manhattan_pearson
value: 79.68810722704522
- type: manhattan_spearman
value: 80.10664518173466
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 77.8303940874723
- type: cos_sim_spearman
value: 79.56812599677549
- type: euclidean_pearson
value: 79.38928950396344
- type: euclidean_spearman
value: 79.56812556750812
- type: manhattan_pearson
value: 79.41057583507681
- type: manhattan_spearman
value: 79.57604428731142
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.90792116013353
- type: cos_sim_spearman
value: 81.18059230233499
- type: euclidean_pearson
value: 80.2622631297375
- type: euclidean_spearman
value: 81.18059230233499
- type: manhattan_pearson
value: 80.23946026135997
- type: manhattan_spearman
value: 81.11947325071426
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.46850619973324
- type: cos_sim_spearman
value: 65.50839374141563
- type: euclidean_pearson
value: 66.60130812260707
- type: euclidean_spearman
value: 65.50839374141563
- type: manhattan_pearson
value: 66.58871918195092
- type: manhattan_spearman
value: 65.7347325297592
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 75.71536124107834
- type: cos_sim_spearman
value: 75.98365906208434
- type: euclidean_pearson
value: 76.64573753881218
- type: euclidean_spearman
value: 75.98365906208434
- type: manhattan_pearson
value: 76.63637189172626
- type: manhattan_spearman
value: 75.9660207821009
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 74.27669440147513
- type: mrr
value: 91.7729356699945
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.028
- type: map_at_10
value: 49.919000000000004
- type: map_at_100
value: 50.91
- type: map_at_1000
value: 50.955
- type: map_at_3
value: 47.785
- type: map_at_5
value: 49.084
- type: mrr_at_1
value: 43.667
- type: mrr_at_10
value: 51.342
- type: mrr_at_100
value: 52.197
- type: mrr_at_1000
value: 52.236000000000004
- type: mrr_at_3
value: 49.667
- type: mrr_at_5
value: 50.766999999999996
- type: ndcg_at_1
value: 43.667
- type: ndcg_at_10
value: 54.029
- type: ndcg_at_100
value: 58.909
- type: ndcg_at_1000
value: 60.131
- type: ndcg_at_3
value: 50.444
- type: ndcg_at_5
value: 52.354
- type: precision_at_1
value: 43.667
- type: precision_at_10
value: 7.432999999999999
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 20.444000000000003
- type: precision_at_5
value: 13.533000000000001
- type: recall_at_1
value: 41.028
- type: recall_at_10
value: 65.011
- type: recall_at_100
value: 88.033
- type: recall_at_1000
value: 97.667
- type: recall_at_3
value: 55.394
- type: recall_at_5
value: 60.183
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.76534653465346
- type: cos_sim_ap
value: 93.83756773536699
- type: cos_sim_f1
value: 87.91097622660598
- type: cos_sim_precision
value: 88.94575230296827
- type: cos_sim_recall
value: 86.9
- type: dot_accuracy
value: 99.76534653465346
- type: dot_ap
value: 93.83756773536699
- type: dot_f1
value: 87.91097622660598
- type: dot_precision
value: 88.94575230296827
- type: dot_recall
value: 86.9
- type: euclidean_accuracy
value: 99.76534653465346
- type: euclidean_ap
value: 93.837567735367
- type: euclidean_f1
value: 87.91097622660598
- type: euclidean_precision
value: 88.94575230296827
- type: euclidean_recall
value: 86.9
- type: manhattan_accuracy
value: 99.76633663366337
- type: manhattan_ap
value: 93.84480825492724
- type: manhattan_f1
value: 87.97145769622833
- type: manhattan_precision
value: 89.70893970893971
- type: manhattan_recall
value: 86.3
- type: max_accuracy
value: 99.76633663366337
- type: max_ap
value: 93.84480825492724
- type: max_f1
value: 87.97145769622833
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 48.078155553339585
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.34857297824906
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.06219491505384
- type: mrr
value: 50.77479097699686
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.48401937651373
- type: cos_sim_spearman
value: 31.048654273022606
- type: dot_pearson
value: 30.484020082707847
- type: dot_spearman
value: 31.048654273022606
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.183
- type: map_at_10
value: 1.32
- type: map_at_100
value: 7.01
- type: map_at_1000
value: 16.957
- type: map_at_3
value: 0.481
- type: map_at_5
value: 0.737
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 78.7
- type: mrr_at_100
value: 78.7
- type: mrr_at_1000
value: 78.7
- type: mrr_at_3
value: 76.0
- type: mrr_at_5
value: 78.7
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 55.846
- type: ndcg_at_100
value: 43.138
- type: ndcg_at_1000
value: 39.4
- type: ndcg_at_3
value: 57.306999999999995
- type: ndcg_at_5
value: 57.294
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 60.0
- type: precision_at_100
value: 44.6
- type: precision_at_1000
value: 17.8
- type: precision_at_3
value: 62.0
- type: precision_at_5
value: 62.0
- type: recall_at_1
value: 0.183
- type: recall_at_10
value: 1.583
- type: recall_at_100
value: 10.412
- type: recall_at_1000
value: 37.358999999999995
- type: recall_at_3
value: 0.516
- type: recall_at_5
value: 0.845
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.7420000000000002
- type: map_at_10
value: 6.4879999999999995
- type: map_at_100
value: 11.654
- type: map_at_1000
value: 13.23
- type: map_at_3
value: 3.148
- type: map_at_5
value: 4.825
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 30.258000000000003
- type: mrr_at_100
value: 31.570999999999998
- type: mrr_at_1000
value: 31.594
- type: mrr_at_3
value: 26.19
- type: mrr_at_5
value: 28.027
- type: ndcg_at_1
value: 15.306000000000001
- type: ndcg_at_10
value: 15.608
- type: ndcg_at_100
value: 28.808
- type: ndcg_at_1000
value: 41.603
- type: ndcg_at_3
value: 13.357
- type: ndcg_at_5
value: 15.306000000000001
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 15.101999999999999
- type: precision_at_100
value: 6.49
- type: precision_at_1000
value: 1.488
- type: precision_at_3
value: 14.966
- type: precision_at_5
value: 17.143
- type: recall_at_1
value: 1.7420000000000002
- type: recall_at_10
value: 12.267
- type: recall_at_100
value: 41.105999999999995
- type: recall_at_1000
value: 80.569
- type: recall_at_3
value: 4.009
- type: recall_at_5
value: 7.417999999999999
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 65.1178
- type: ap
value: 11.974961582206614
- type: f1
value: 50.24491996814835
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 51.63271080928127
- type: f1
value: 51.81589904316042
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.791709673552276
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.05418131966383
- type: cos_sim_ap
value: 64.72353098186304
- type: cos_sim_f1
value: 61.313330054107226
- type: cos_sim_precision
value: 57.415937356057114
- type: cos_sim_recall
value: 65.77836411609499
- type: dot_accuracy
value: 83.05418131966383
- type: dot_ap
value: 64.72352701424393
- type: dot_f1
value: 61.313330054107226
- type: dot_precision
value: 57.415937356057114
- type: dot_recall
value: 65.77836411609499
- type: euclidean_accuracy
value: 83.05418131966383
- type: euclidean_ap
value: 64.72353124585976
- type: euclidean_f1
value: 61.313330054107226
- type: euclidean_precision
value: 57.415937356057114
- type: euclidean_recall
value: 65.77836411609499
- type: manhattan_accuracy
value: 82.98861536627525
- type: manhattan_ap
value: 64.53981837182303
- type: manhattan_f1
value: 60.94911377930246
- type: manhattan_precision
value: 53.784056508577194
- type: manhattan_recall
value: 70.31662269129288
- type: max_accuracy
value: 83.05418131966383
- type: max_ap
value: 64.72353124585976
- type: max_f1
value: 61.313330054107226
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.06225016493966
- type: cos_sim_ap
value: 84.00829172423475
- type: cos_sim_f1
value: 76.1288446157202
- type: cos_sim_precision
value: 72.11737153877945
- type: cos_sim_recall
value: 80.61287342161995
- type: dot_accuracy
value: 88.06225016493966
- type: dot_ap
value: 84.00827913374181
- type: dot_f1
value: 76.1288446157202
- type: dot_precision
value: 72.11737153877945
- type: dot_recall
value: 80.61287342161995
- type: euclidean_accuracy
value: 88.06225016493966
- type: euclidean_ap
value: 84.00827099295034
- type: euclidean_f1
value: 76.1288446157202
- type: euclidean_precision
value: 72.11737153877945
- type: euclidean_recall
value: 80.61287342161995
- type: manhattan_accuracy
value: 88.05642876547523
- type: manhattan_ap
value: 83.9157542691417
- type: manhattan_f1
value: 76.09045667447307
- type: manhattan_precision
value: 72.50348675034869
- type: manhattan_recall
value: 80.05081613797351
- type: max_accuracy
value: 88.06225016493966
- type: max_ap
value: 84.00829172423475
- type: max_f1
value: 76.1288446157202
---
MTEB evaluation results on English-language tasks for the 'multi-qa-MiniLM-L6-cos-v1' sentence-transformers model.
The model and licence can be found [here](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1).
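
The scores above come from the MTEB benchmark harness. Below is a minimal sketch of how results like these can be reproduced with the `mteb` and `sentence-transformers` packages; the task selection and output folder are illustrative assumptions, not the exact configuration used to generate this card.

```python
# Minimal sketch (assumed setup): reproduce MTEB-style scores for the model
# referenced above using the `mteb` and `sentence-transformers` packages.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load the embedding model evaluated in this card.
model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# The task list here is illustrative; the card above reports many more tasks.
evaluation = MTEB(tasks=["Banking77Classification", "SciFact", "STSBenchmark"])

# Scores are written as JSON files, one per task, under the output folder.
evaluation.run(model, output_folder="results/multi-qa-MiniLM-L6-cos-v1")
```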
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
twadada/nmc-cls-50_correct | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,726 | 1,726 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: nomic_classification_50
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 67.26865671641791
- type: ap
value: 30.002473367582354
- type: f1
value: 61.1971953752801
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 64.285825
- type: ap
value: 59.48909573055728
- type: f1
value: 63.9870581887586
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 32.094
- type: f1
value: 31.58604218365913
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 13.016
- type: map_at_10
value: 22.521
- type: map_at_100
value: 23.799
- type: map_at_1000
value: 23.883
- type: map_at_3
value: 19.381
- type: map_at_5
value: 20.928
- type: mrr_at_1
value: 13.442000000000002
- type: mrr_at_10
value: 22.667
- type: mrr_at_100
value: 23.944
- type: mrr_at_1000
value: 24.029
- type: mrr_at_3
value: 19.523
- type: mrr_at_5
value: 21.102
- type: ndcg_at_1
value: 13.016
- type: ndcg_at_10
value: 28.059
- type: ndcg_at_100
value: 34.812
- type: ndcg_at_1000
value: 37.074
- type: ndcg_at_3
value: 21.438
- type: ndcg_at_5
value: 24.238
- type: precision_at_1
value: 13.016
- type: precision_at_10
value: 4.595
- type: precision_at_100
value: 0.787
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 9.128
- type: precision_at_5
value: 6.842
- type: recall_at_1
value: 13.016
- type: recall_at_10
value: 45.946
- type: recall_at_100
value: 78.73400000000001
- type: recall_at_1000
value: 96.515
- type: recall_at_3
value: 27.383000000000003
- type: recall_at_5
value: 34.211000000000006
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 25.72708581045921
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 17.273102202229808
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 48.99875215426555
- type: mrr
value: 60.91731521786923
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 67.57739420865997
- type: cos_sim_spearman
value: 68.8491591362424
- type: euclidean_pearson
value: 67.94540320514962
- type: euclidean_spearman
value: 68.8491591362424
- type: manhattan_pearson
value: 65.69150432274179
- type: manhattan_spearman
value: 66.33223431652344
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 57.698051948051955
- type: f1
value: 56.00046616188858
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 24.472330529075432
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 15.20312280133779
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 11.856
- type: map_at_10
value: 15.922
- type: map_at_100
value: 16.692999999999998
- type: map_at_1000
value: 16.844
- type: map_at_3
value: 14.233
- type: map_at_5
value: 15.315999999999999
- type: mrr_at_1
value: 14.449000000000002
- type: mrr_at_10
value: 19.359
- type: mrr_at_100
value: 20.095
- type: mrr_at_1000
value: 20.194000000000003
- type: mrr_at_3
value: 17.501
- type: mrr_at_5
value: 18.66
- type: ndcg_at_1
value: 14.449000000000002
- type: ndcg_at_10
value: 19.192999999999998
- type: ndcg_at_100
value: 23.237
- type: ndcg_at_1000
value: 27.032
- type: ndcg_at_3
value: 16.265
- type: ndcg_at_5
value: 17.863
- type: precision_at_1
value: 14.449000000000002
- type: precision_at_10
value: 3.662
- type: precision_at_100
value: 0.718
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 7.630000000000001
- type: precision_at_5
value: 5.866
- type: recall_at_1
value: 11.856
- type: recall_at_10
value: 25.694
- type: recall_at_100
value: 44.003
- type: recall_at_1000
value: 71.039
- type: recall_at_3
value: 17.136000000000003
- type: recall_at_5
value: 21.393
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 8.758000000000001
- type: map_at_10
value: 12.205
- type: map_at_100
value: 12.859000000000002
- type: map_at_1000
value: 12.967
- type: map_at_3
value: 11.196
- type: map_at_5
value: 11.676
- type: mrr_at_1
value: 11.21
- type: mrr_at_10
value: 15.062000000000001
- type: mrr_at_100
value: 15.720999999999998
- type: mrr_at_1000
value: 15.803
- type: mrr_at_3
value: 13.896
- type: mrr_at_5
value: 14.456
- type: ndcg_at_1
value: 11.21
- type: ndcg_at_10
value: 14.64
- type: ndcg_at_100
value: 18.163
- type: ndcg_at_1000
value: 21.15
- type: ndcg_at_3
value: 12.838
- type: ndcg_at_5
value: 13.475000000000001
- type: precision_at_1
value: 11.21
- type: precision_at_10
value: 2.79
- type: precision_at_100
value: 0.575
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 6.306000000000001
- type: precision_at_5
value: 4.369
- type: recall_at_1
value: 8.758000000000001
- type: recall_at_10
value: 19.213
- type: recall_at_100
value: 35.434
- type: recall_at_1000
value: 56.720000000000006
- type: recall_at_3
value: 13.758999999999999
- type: recall_at_5
value: 15.618000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 11.655999999999999
- type: map_at_10
value: 15.429
- type: map_at_100
value: 16.223000000000003
- type: map_at_1000
value: 16.334
- type: map_at_3
value: 14.069999999999999
- type: map_at_5
value: 14.815000000000001
- type: mrr_at_1
value: 13.48
- type: mrr_at_10
value: 17.421
- type: mrr_at_100
value: 18.195
- type: mrr_at_1000
value: 18.285
- type: mrr_at_3
value: 15.967
- type: mrr_at_5
value: 16.75
- type: ndcg_at_1
value: 13.48
- type: ndcg_at_10
value: 18.053
- type: ndcg_at_100
value: 22.471
- type: ndcg_at_1000
value: 25.689
- type: ndcg_at_3
value: 15.290000000000001
- type: ndcg_at_5
value: 16.536
- type: precision_at_1
value: 13.48
- type: precision_at_10
value: 2.991
- type: precision_at_100
value: 0.586
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 6.729
- type: precision_at_5
value: 4.853
- type: recall_at_1
value: 11.655999999999999
- type: recall_at_10
value: 24.329
- type: recall_at_100
value: 45.178000000000004
- type: recall_at_1000
value: 69.83200000000001
- type: recall_at_3
value: 16.692
- type: recall_at_5
value: 19.767000000000003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 3.672
- type: map_at_10
value: 5.507
- type: map_at_100
value: 5.853
- type: map_at_1000
value: 5.9319999999999995
- type: map_at_3
value: 4.648
- type: map_at_5
value: 5.087
- type: mrr_at_1
value: 4.0680000000000005
- type: mrr_at_10
value: 6.03
- type: mrr_at_100
value: 6.404999999999999
- type: mrr_at_1000
value: 6.485
- type: mrr_at_3
value: 5.16
- type: mrr_at_5
value: 5.595
- type: ndcg_at_1
value: 4.0680000000000005
- type: ndcg_at_10
value: 6.955
- type: ndcg_at_100
value: 9.059000000000001
- type: ndcg_at_1000
value: 11.916
- type: ndcg_at_3
value: 5.137
- type: ndcg_at_5
value: 5.912
- type: precision_at_1
value: 4.0680000000000005
- type: precision_at_10
value: 1.232
- type: precision_at_100
value: 0.246
- type: precision_at_1000
value: 0.053
- type: precision_at_3
value: 2.26
- type: precision_at_5
value: 1.763
- type: recall_at_1
value: 3.672
- type: recall_at_10
value: 11.149000000000001
- type: recall_at_100
value: 21.564
- type: recall_at_1000
value: 44.851
- type: recall_at_3
value: 6.008
- type: recall_at_5
value: 7.91
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 2.308
- type: map_at_10
value: 3.431
- type: map_at_100
value: 3.8890000000000002
- type: map_at_1000
value: 3.988
- type: map_at_3
value: 2.896
- type: map_at_5
value: 3.182
- type: mrr_at_1
value: 2.9850000000000003
- type: mrr_at_10
value: 4.4110000000000005
- type: mrr_at_100
value: 4.925
- type: mrr_at_1000
value: 5.022
- type: mrr_at_3
value: 3.669
- type: mrr_at_5
value: 4.086
- type: ndcg_at_1
value: 2.9850000000000003
- type: ndcg_at_10
value: 4.463
- type: ndcg_at_100
value: 7.03
- type: ndcg_at_1000
value: 10.358
- type: ndcg_at_3
value: 3.3529999999999998
- type: ndcg_at_5
value: 3.866
- type: precision_at_1
value: 2.9850000000000003
- type: precision_at_10
value: 0.9079999999999999
- type: precision_at_100
value: 0.26
- type: precision_at_1000
value: 0.065
- type: precision_at_3
value: 1.575
- type: precision_at_5
value: 1.318
- type: recall_at_1
value: 2.308
- type: recall_at_10
value: 6.776999999999999
- type: recall_at_100
value: 18.618000000000002
- type: recall_at_1000
value: 44.175
- type: recall_at_3
value: 3.687
- type: recall_at_5
value: 4.948
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 6.862
- type: map_at_10
value: 9.94
- type: map_at_100
value: 10.624
- type: map_at_1000
value: 10.742
- type: map_at_3
value: 8.690000000000001
- type: map_at_5
value: 9.372
- type: mrr_at_1
value: 8.469999999999999
- type: mrr_at_10
value: 12.328999999999999
- type: mrr_at_100
value: 13.035
- type: mrr_at_1000
value: 13.123999999999999
- type: mrr_at_3
value: 10.828
- type: mrr_at_5
value: 11.752
- type: ndcg_at_1
value: 8.469999999999999
- type: ndcg_at_10
value: 12.377
- type: ndcg_at_100
value: 16.151
- type: ndcg_at_1000
value: 19.580000000000002
- type: ndcg_at_3
value: 9.964
- type: ndcg_at_5
value: 11.137
- type: precision_at_1
value: 8.469999999999999
- type: precision_at_10
value: 2.4250000000000003
- type: precision_at_100
value: 0.5479999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 4.812
- type: precision_at_5
value: 3.7539999999999996
- type: recall_at_1
value: 6.862
- type: recall_at_10
value: 17.59
- type: recall_at_100
value: 34.557
- type: recall_at_1000
value: 59.78099999999999
- type: recall_at_3
value: 10.838000000000001
- type: recall_at_5
value: 13.8
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 4.569
- type: map_at_10
value: 6.9190000000000005
- type: map_at_100
value: 7.435
- type: map_at_1000
value: 7.553999999999999
- type: map_at_3
value: 6.0409999999999995
- type: map_at_5
value: 6.4159999999999995
- type: mrr_at_1
value: 5.822
- type: mrr_at_10
value: 8.639
- type: mrr_at_100
value: 9.195
- type: mrr_at_1000
value: 9.292
- type: mrr_at_3
value: 7.571999999999999
- type: mrr_at_5
value: 8.04
- type: ndcg_at_1
value: 5.822
- type: ndcg_at_10
value: 8.808
- type: ndcg_at_100
value: 11.846
- type: ndcg_at_1000
value: 15.476
- type: ndcg_at_3
value: 6.995
- type: ndcg_at_5
value: 7.5920000000000005
- type: precision_at_1
value: 5.822
- type: precision_at_10
value: 1.7469999999999999
- type: precision_at_100
value: 0.398
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_3
value: 3.4250000000000003
- type: precision_at_5
value: 2.489
- type: recall_at_1
value: 4.569
- type: recall_at_10
value: 13.035
- type: recall_at_100
value: 27.102999999999998
- type: recall_at_1000
value: 54.137
- type: recall_at_3
value: 7.839
- type: recall_at_5
value: 9.469
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 6.289666666666665
- type: map_at_10
value: 8.76325
- type: map_at_100
value: 9.314083333333333
- type: map_at_1000
value: 9.419
- type: map_at_3
value: 7.856916666666668
- type: map_at_5
value: 8.359333333333334
- type: mrr_at_1
value: 7.752333333333332
- type: mrr_at_10
value: 10.620333333333333
- type: mrr_at_100
value: 11.191083333333333
- type: mrr_at_1000
value: 11.2795
- type: mrr_at_3
value: 9.572916666666668
- type: mrr_at_5
value: 10.152499999999998
- type: ndcg_at_1
value: 7.752333333333332
- type: ndcg_at_10
value: 10.657000000000002
- type: ndcg_at_100
value: 13.755166666666666
- type: ndcg_at_1000
value: 16.9275
- type: ndcg_at_3
value: 8.904916666666665
- type: ndcg_at_5
value: 9.709083333333334
- type: precision_at_1
value: 7.752333333333332
- type: precision_at_10
value: 1.969166666666667
- type: precision_at_100
value: 0.42624999999999996
- type: precision_at_1000
value: 0.08475000000000002
- type: precision_at_3
value: 4.182
- type: precision_at_5
value: 3.0942499999999997
- type: recall_at_1
value: 6.289666666666665
- type: recall_at_10
value: 14.695083333333333
- type: recall_at_100
value: 29.238666666666663
- type: recall_at_1000
value: 53.20016666666667
- type: recall_at_3
value: 9.667
- type: recall_at_5
value: 11.766416666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 5.188000000000001
- type: map_at_10
value: 6.97
- type: map_at_100
value: 7.380000000000001
- type: map_at_1000
value: 7.446999999999999
- type: map_at_3
value: 6.357
- type: map_at_5
value: 6.736000000000001
- type: mrr_at_1
value: 6.748
- type: mrr_at_10
value: 8.885
- type: mrr_at_100
value: 9.285
- type: mrr_at_1000
value: 9.353
- type: mrr_at_3
value: 8.206
- type: mrr_at_5
value: 8.689
- type: ndcg_at_1
value: 6.748
- type: ndcg_at_10
value: 8.394
- type: ndcg_at_100
value: 10.554
- type: ndcg_at_1000
value: 12.786
- type: ndcg_at_3
value: 7.227
- type: ndcg_at_5
value: 7.878
- type: precision_at_1
value: 6.748
- type: precision_at_10
value: 1.442
- type: precision_at_100
value: 0.27799999999999997
- type: precision_at_1000
value: 0.052
- type: precision_at_3
value: 3.3230000000000004
- type: precision_at_5
value: 2.4539999999999997
- type: recall_at_1
value: 5.188000000000001
- type: recall_at_10
value: 11.109
- type: recall_at_100
value: 21.134
- type: recall_at_1000
value: 38.686
- type: recall_at_3
value: 7.795000000000001
- type: recall_at_5
value: 9.435
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 3.4070000000000005
- type: map_at_10
value: 4.735
- type: map_at_100
value: 5.083
- type: map_at_1000
value: 5.162
- type: map_at_3
value: 4.261
- type: map_at_5
value: 4.504
- type: mrr_at_1
value: 4.1290000000000004
- type: mrr_at_10
value: 5.792
- type: mrr_at_100
value: 6.209
- type: mrr_at_1000
value: 6.283999999999999
- type: mrr_at_3
value: 5.173
- type: mrr_at_5
value: 5.505
- type: ndcg_at_1
value: 4.1290000000000004
- type: ndcg_at_10
value: 5.8020000000000005
- type: ndcg_at_100
value: 7.861
- type: ndcg_at_1000
value: 10.495000000000001
- type: ndcg_at_3
value: 4.79
- type: ndcg_at_5
value: 5.2299999999999995
- type: precision_at_1
value: 4.1290000000000004
- type: precision_at_10
value: 1.084
- type: precision_at_100
value: 0.262
- type: precision_at_1000
value: 0.06
- type: precision_at_3
value: 2.237
- type: precision_at_5
value: 1.6789999999999998
- type: recall_at_1
value: 3.4070000000000005
- type: recall_at_10
value: 8.057
- type: recall_at_100
value: 17.662
- type: recall_at_1000
value: 37.738
- type: recall_at_3
value: 5.27
- type: recall_at_5
value: 6.314
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 5.559
- type: map_at_10
value: 7.374
- type: map_at_100
value: 7.9159999999999995
- type: map_at_1000
value: 8.007
- type: map_at_3
value: 6.882000000000001
- type: map_at_5
value: 7.1209999999999996
- type: mrr_at_1
value: 6.622999999999999
- type: mrr_at_10
value: 8.873000000000001
- type: mrr_at_100
value: 9.478
- type: mrr_at_1000
value: 9.562
- type: mrr_at_3
value: 8.256
- type: mrr_at_5
value: 8.535
- type: ndcg_at_1
value: 6.622999999999999
- type: ndcg_at_10
value: 8.738999999999999
- type: ndcg_at_100
value: 11.931
- type: ndcg_at_1000
value: 14.862
- type: ndcg_at_3
value: 7.713
- type: ndcg_at_5
value: 8.116
- type: precision_at_1
value: 6.622999999999999
- type: precision_at_10
value: 1.493
- type: precision_at_100
value: 0.361
- type: precision_at_1000
value: 0.06899999999999999
- type: precision_at_3
value: 3.6069999999999998
- type: precision_at_5
value: 2.463
- type: recall_at_1
value: 5.559
- type: recall_at_10
value: 11.509
- type: recall_at_100
value: 26.573
- type: recall_at_1000
value: 49.16
- type: recall_at_3
value: 8.468
- type: recall_at_5
value: 9.64
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 7.291
- type: map_at_10
value: 9.99
- type: map_at_100
value: 10.659
- type: map_at_1000
value: 10.793999999999999
- type: map_at_3
value: 8.968
- type: map_at_5
value: 9.59
- type: mrr_at_1
value: 9.684
- type: mrr_at_10
value: 12.812000000000001
- type: mrr_at_100
value: 13.482
- type: mrr_at_1000
value: 13.575999999999999
- type: mrr_at_3
value: 11.561
- type: mrr_at_5
value: 12.232999999999999
- type: ndcg_at_1
value: 9.684
- type: ndcg_at_10
value: 12.281
- type: ndcg_at_100
value: 15.994
- type: ndcg_at_1000
value: 19.578
- type: ndcg_at_3
value: 10.525
- type: ndcg_at_5
value: 11.349
- type: precision_at_1
value: 9.684
- type: precision_at_10
value: 2.451
- type: precision_at_100
value: 0.5910000000000001
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 5.138
- type: precision_at_5
value: 3.794
- type: recall_at_1
value: 7.291
- type: recall_at_10
value: 16.28
- type: recall_at_100
value: 34.432
- type: recall_at_1000
value: 60.155
- type: recall_at_3
value: 10.767
- type: recall_at_5
value: 13.156
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 4.35
- type: map_at_10
value: 6.737
- type: map_at_100
value: 7.155
- type: map_at_1000
value: 7.257
- type: map_at_3
value: 6.0409999999999995
- type: map_at_5
value: 6.497
- type: mrr_at_1
value: 5.36
- type: mrr_at_10
value: 7.831
- type: mrr_at_100
value: 8.268
- type: mrr_at_1000
value: 8.373999999999999
- type: mrr_at_3
value: 7.086
- type: mrr_at_5
value: 7.529
- type: ndcg_at_1
value: 5.36
- type: ndcg_at_10
value: 8.179
- type: ndcg_at_100
value: 10.764999999999999
- type: ndcg_at_1000
value: 14.208000000000002
- type: ndcg_at_3
value: 6.762
- type: ndcg_at_5
value: 7.555000000000001
- type: precision_at_1
value: 5.36
- type: precision_at_10
value: 1.405
- type: precision_at_100
value: 0.292
- type: precision_at_1000
value: 0.066
- type: precision_at_3
value: 3.1419999999999995
- type: precision_at_5
value: 2.329
- type: recall_at_1
value: 4.35
- type: recall_at_10
value: 11.599
- type: recall_at_100
value: 24.606
- type: recall_at_1000
value: 52.128
- type: recall_at_3
value: 7.745
- type: recall_at_5
value: 9.747
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 1.496
- type: map_at_10
value: 2.412
- type: map_at_100
value: 2.899
- type: map_at_1000
value: 2.996
- type: map_at_3
value: 1.9949999999999999
- type: map_at_5
value: 2.171
- type: mrr_at_1
value: 3.1919999999999997
- type: mrr_at_10
value: 5.2589999999999995
- type: mrr_at_100
value: 6.053
- type: mrr_at_1000
value: 6.142
- type: mrr_at_3
value: 4.376
- type: mrr_at_5
value: 4.793
- type: ndcg_at_1
value: 3.1919999999999997
- type: ndcg_at_10
value: 3.81
- type: ndcg_at_100
value: 6.822
- type: ndcg_at_1000
value: 9.649000000000001
- type: ndcg_at_3
value: 2.817
- type: ndcg_at_5
value: 3.114
- type: precision_at_1
value: 3.1919999999999997
- type: precision_at_10
value: 1.29
- type: precision_at_100
value: 0.45199999999999996
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 2.085
- type: precision_at_5
value: 1.6680000000000001
- type: recall_at_1
value: 1.496
- type: recall_at_10
value: 5.053
- type: recall_at_100
value: 16.066
- type: recall_at_1000
value: 32.796
- type: recall_at_3
value: 2.662
- type: recall_at_5
value: 3.434
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 0.473
- type: map_at_10
value: 1.149
- type: map_at_100
value: 1.614
- type: map_at_1000
value: 1.7760000000000002
- type: map_at_3
value: 0.808
- type: map_at_5
value: 0.9520000000000001
- type: mrr_at_1
value: 9.0
- type: mrr_at_10
value: 13.528
- type: mrr_at_100
value: 14.567
- type: mrr_at_1000
value: 14.648
- type: mrr_at_3
value: 12.417
- type: mrr_at_5
value: 13.129
- type: ndcg_at_1
value: 6.375
- type: ndcg_at_10
value: 4.561
- type: ndcg_at_100
value: 5.412
- type: ndcg_at_1000
value: 8.173
- type: ndcg_at_3
value: 5.882
- type: ndcg_at_5
value: 5.16
- type: precision_at_1
value: 9.0
- type: precision_at_10
value: 4.45
- type: precision_at_100
value: 1.53
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 7.667
- type: precision_at_5
value: 6.1
- type: recall_at_1
value: 0.473
- type: recall_at_10
value: 2.11
- type: recall_at_100
value: 6.957000000000001
- type: recall_at_1000
value: 16.188
- type: recall_at_3
value: 1.031
- type: recall_at_5
value: 1.447
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 36.510000000000005
- type: f1
value: 32.55269059609507
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 1.735
- type: map_at_10
value: 2.7969999999999997
- type: map_at_100
value: 3.0300000000000002
- type: map_at_1000
value: 3.078
- type: map_at_3
value: 2.408
- type: map_at_5
value: 2.62
- type: mrr_at_1
value: 1.83
- type: mrr_at_10
value: 2.946
- type: mrr_at_100
value: 3.196
- type: mrr_at_1000
value: 3.2460000000000004
- type: mrr_at_3
value: 2.54
- type: mrr_at_5
value: 2.768
- type: ndcg_at_1
value: 1.83
- type: ndcg_at_10
value: 3.481
- type: ndcg_at_100
value: 4.9110000000000005
- type: ndcg_at_1000
value: 6.553000000000001
- type: ndcg_at_3
value: 2.661
- type: ndcg_at_5
value: 3.052
- type: precision_at_1
value: 1.83
- type: precision_at_10
value: 0.59
- type: precision_at_100
value: 0.13899999999999998
- type: precision_at_1000
value: 0.029
- type: precision_at_3
value: 1.16
- type: precision_at_5
value: 0.897
- type: recall_at_1
value: 1.735
- type: recall_at_10
value: 5.514
- type: recall_at_100
value: 12.671
- type: recall_at_1000
value: 26.081
- type: recall_at_3
value: 3.2649999999999997
- type: recall_at_5
value: 4.205
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 1.8519999999999999
- type: map_at_10
value: 3.3000000000000003
- type: map_at_100
value: 3.7699999999999996
- type: map_at_1000
value: 3.904
- type: map_at_3
value: 2.665
- type: map_at_5
value: 2.991
- type: mrr_at_1
value: 3.8580000000000005
- type: mrr_at_10
value: 6.611000000000001
- type: mrr_at_100
value: 7.244000000000001
- type: mrr_at_1000
value: 7.356999999999999
- type: mrr_at_3
value: 5.607
- type: mrr_at_5
value: 6.101
- type: ndcg_at_1
value: 3.8580000000000005
- type: ndcg_at_10
value: 5.081
- type: ndcg_at_100
value: 8.054
- type: ndcg_at_1000
value: 12.078999999999999
- type: ndcg_at_3
value: 3.934
- type: ndcg_at_5
value: 4.349
- type: precision_at_1
value: 3.8580000000000005
- type: precision_at_10
value: 1.6199999999999999
- type: precision_at_100
value: 0.477
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 2.881
- type: precision_at_5
value: 2.253
- type: recall_at_1
value: 1.8519999999999999
- type: recall_at_10
value: 7.109999999999999
- type: recall_at_100
value: 19.224
- type: recall_at_1000
value: 45.913
- type: recall_at_3
value: 3.6839999999999997
- type: recall_at_5
value: 4.999
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 1.789
- type: map_at_10
value: 2.761
- type: map_at_100
value: 2.997
- type: map_at_1000
value: 3.05
- type: map_at_3
value: 2.4330000000000003
- type: map_at_5
value: 2.612
- type: mrr_at_1
value: 3.579
- type: mrr_at_10
value: 5.311
- type: mrr_at_100
value: 5.692
- type: mrr_at_1000
value: 5.762
- type: mrr_at_3
value: 4.718
- type: mrr_at_5
value: 5.035
- type: ndcg_at_1
value: 3.579
- type: ndcg_at_10
value: 3.988
- type: ndcg_at_100
value: 5.508
- type: ndcg_at_1000
value: 7.3340000000000005
- type: ndcg_at_3
value: 3.183
- type: ndcg_at_5
value: 3.5589999999999997
- type: precision_at_1
value: 3.579
- type: precision_at_10
value: 1.002
- type: precision_at_100
value: 0.22599999999999998
- type: precision_at_1000
value: 0.047
- type: precision_at_3
value: 2.116
- type: precision_at_5
value: 1.569
- type: recall_at_1
value: 1.789
- type: recall_at_10
value: 5.01
- type: recall_at_100
value: 11.296000000000001
- type: recall_at_1000
value: 23.733999999999998
- type: recall_at_3
value: 3.174
- type: recall_at_5
value: 3.923
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 64.362
- type: ap
value: 59.55580844913024
- type: f1
value: 64.25451691590179
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 1.719
- type: map_at_10
value: 2.812
- type: map_at_100
value: 3.124
- type: map_at_1000
value: 3.18
- type: map_at_3
value: 2.4
- type: map_at_5
value: 2.598
- type: mrr_at_1
value: 1.7770000000000001
- type: mrr_at_10
value: 2.889
- type: mrr_at_100
value: 3.211
- type: mrr_at_1000
value: 3.2680000000000002
- type: mrr_at_3
value: 2.467
- type: mrr_at_5
value: 2.67
- type: ndcg_at_1
value: 1.762
- type: ndcg_at_10
value: 3.52
- type: ndcg_at_100
value: 5.343
- type: ndcg_at_1000
value: 7.217999999999999
- type: ndcg_at_3
value: 2.64
- type: ndcg_at_5
value: 2.9979999999999998
- type: precision_at_1
value: 1.762
- type: precision_at_10
value: 0.5950000000000001
- type: precision_at_100
value: 0.155
- type: precision_at_1000
value: 0.032
- type: precision_at_3
value: 1.127
- type: precision_at_5
value: 0.857
- type: recall_at_1
value: 1.719
- type: recall_at_10
value: 5.743
- type: recall_at_100
value: 14.89
- type: recall_at_1000
value: 30.267
- type: recall_at_3
value: 3.2779999999999996
- type: recall_at_5
value: 4.136
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 80.50615595075239
- type: f1
value: 80.1136210996985
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 54.031007751937985
- type: f1
value: 34.910049182212575
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.96973772696705
- type: f1
value: 51.482021499786136
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.19771351714862
- type: f1
value: 61.16551291933069
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 23.502491371355365
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 20.04508433667435
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 27.223268042111425
- type: mrr
value: 27.804265249287663
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 0.438
- type: map_at_10
value: 1.414
- type: map_at_100
value: 2.027
- type: map_at_1000
value: 2.866
- type: map_at_3
value: 0.9690000000000001
- type: map_at_5
value: 1.214
- type: mrr_at_1
value: 8.978
- type: mrr_at_10
value: 16.274
- type: mrr_at_100
value: 17.544999999999998
- type: mrr_at_1000
value: 17.649
- type: mrr_at_3
value: 13.674
- type: mrr_at_5
value: 15.021
- type: ndcg_at_1
value: 8.514
- type: ndcg_at_10
value: 7.301
- type: ndcg_at_100
value: 8.613999999999999
- type: ndcg_at_1000
value: 18.851000000000003
- type: ndcg_at_3
value: 8.193
- type: ndcg_at_5
value: 7.747999999999999
- type: precision_at_1
value: 8.978
- type: precision_at_10
value: 5.913
- type: precision_at_100
value: 3.198
- type: precision_at_1000
value: 1.6
- type: precision_at_3
value: 8.256
- type: precision_at_5
value: 7.1209999999999996
- type: recall_at_1
value: 0.438
- type: recall_at_10
value: 3.5360000000000005
- type: recall_at_100
value: 12.414
- type: recall_at_1000
value: 47.949000000000005
- type: recall_at_3
value: 1.462
- type: recall_at_5
value: 2.4299999999999997
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 2.2640000000000002
- type: map_at_10
value: 3.6859999999999995
- type: map_at_100
value: 4.071000000000001
- type: map_at_1000
value: 4.141
- type: map_at_3
value: 3.136
- type: map_at_5
value: 3.4130000000000003
- type: mrr_at_1
value: 2.52
- type: mrr_at_10
value: 4.093
- type: mrr_at_100
value: 4.51
- type: mrr_at_1000
value: 4.583
- type: mrr_at_3
value: 3.4909999999999997
- type: mrr_at_5
value: 3.791
- type: ndcg_at_1
value: 2.52
- type: ndcg_at_10
value: 4.696
- type: ndcg_at_100
value: 6.914
- type: ndcg_at_1000
value: 9.264999999999999
- type: ndcg_at_3
value: 3.5159999999999996
- type: ndcg_at_5
value: 4.026
- type: precision_at_1
value: 2.52
- type: precision_at_10
value: 0.855
- type: precision_at_100
value: 0.211
- type: precision_at_1000
value: 0.044000000000000004
- type: precision_at_3
value: 1.6420000000000001
- type: precision_at_5
value: 1.257
- type: recall_at_1
value: 2.2640000000000002
- type: recall_at_10
value: 7.478999999999999
- type: recall_at_100
value: 18.163
- type: recall_at_1000
value: 36.846000000000004
- type: recall_at_3
value: 4.268000000000001
- type: recall_at_5
value: 5.485
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 44.559
- type: map_at_10
value: 53.623
- type: map_at_100
value: 54.513999999999996
- type: map_at_1000
value: 54.584999999999994
- type: map_at_3
value: 51.229
- type: map_at_5
value: 52.635
- type: mrr_at_1
value: 51.23
- type: mrr_at_10
value: 58.431999999999995
- type: mrr_at_100
value: 59.00300000000001
- type: mrr_at_1000
value: 59.036
- type: mrr_at_3
value: 56.61000000000001
- type: mrr_at_5
value: 57.730000000000004
- type: ndcg_at_1
value: 51.28
- type: ndcg_at_10
value: 58.306000000000004
- type: ndcg_at_100
value: 61.915
- type: ndcg_at_1000
value: 63.343
- type: ndcg_at_3
value: 54.608000000000004
- type: ndcg_at_5
value: 56.431
- type: precision_at_1
value: 51.28
- type: precision_at_10
value: 8.755
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 23.297
- type: precision_at_5
value: 15.598
- type: recall_at_1
value: 44.559
- type: recall_at_10
value: 67.491
- type: recall_at_100
value: 82.938
- type: recall_at_1000
value: 92.72200000000001
- type: recall_at_3
value: 56.952999999999996
- type: recall_at_5
value: 61.83
- type: map_at_1
value: 1.13
- type: map_at_10
value: 2.4570000000000003
- type: map_at_100
value: 3.048
- type: map_at_1000
value: 3.234
- type: map_at_3
value: 1.802
- type: map_at_5
value: 2.078
- type: mrr_at_1
value: 5.6000000000000005
- type: mrr_at_10
value: 9.468
- type: mrr_at_100
value: 10.472
- type: mrr_at_1000
value: 10.605
- type: mrr_at_3
value: 7.7829999999999995
- type: mrr_at_5
value: 8.468
- type: ndcg_at_1
value: 5.6000000000000005
- type: ndcg_at_10
value: 4.936999999999999
- type: ndcg_at_100
value: 8.597000000000001
- type: ndcg_at_1000
value: 13.508999999999999
- type: ndcg_at_3
value: 4.345000000000001
- type: ndcg_at_5
value: 3.782
- type: precision_at_1
value: 5.6000000000000005
- type: precision_at_10
value: 2.68
- type: precision_at_100
value: 0.814
- type: precision_at_1000
value: 0.201
- type: precision_at_3
value: 4.0
- type: precision_at_5
value: 3.2800000000000002
- type: recall_at_1
value: 1.13
- type: recall_at_10
value: 5.457999999999999
- type: recall_at_100
value: 16.533
- type: recall_at_1000
value: 40.983000000000004
- type: recall_at_3
value: 2.44
- type: recall_at_5
value: 3.343
- type: map_at_1
value: 0.055999999999999994
- type: map_at_10
value: 0.292
- type: map_at_100
value: 1.335
- type: map_at_1000
value: 3.074
- type: map_at_3
value: 0.123
- type: map_at_5
value: 0.191
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 38.879999999999995
- type: mrr_at_100
value: 39.953
- type: mrr_at_1000
value: 39.978
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 37.233
- type: ndcg_at_1
value: 22.0
- type: ndcg_at_10
value: 19.601
- type: ndcg_at_100
value: 14.735000000000001
- type: ndcg_at_1000
value: 14.915000000000001
- type: ndcg_at_3
value: 20.358
- type: ndcg_at_5
value: 21.543
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 21.2
- type: precision_at_100
value: 15.5
- type: precision_at_1000
value: 7.417999999999999
- type: precision_at_3
value: 22.667
- type: precision_at_5
value: 24.4
- type: recall_at_1
value: 0.055999999999999994
- type: recall_at_10
value: 0.44799999999999995
- type: recall_at_100
value: 3.3070000000000004
- type: recall_at_1000
value: 15.334
- type: recall_at_3
value: 0.13699999999999998
- type: recall_at_5
value: 0.27499999999999997
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 22.705180109905008
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 34.83434688813055
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 73.60605619256489
- type: cos_sim_spearman
value: 67.90225840700592
- type: euclidean_pearson
value: 72.33353541178548
- type: euclidean_spearman
value: 67.9022659941869
- type: manhattan_pearson
value: 72.05976338595539
- type: manhattan_spearman
value: 67.56691734710643
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 64.53970557195757
- type: cos_sim_spearman
value: 57.30488503100292
- type: euclidean_pearson
value: 61.892226450716926
- type: euclidean_spearman
value: 57.30614347479237
- type: manhattan_pearson
value: 62.211926976767394
- type: manhattan_spearman
value: 57.68789726090663
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 71.42835803449617
- type: cos_sim_spearman
value: 73.427655387467
- type: euclidean_pearson
value: 72.95603876012058
- type: euclidean_spearman
value: 73.42766761221965
- type: manhattan_pearson
value: 72.95156508487149
- type: manhattan_spearman
value: 73.50217040506452
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 68.76336169760297
- type: cos_sim_spearman
value: 65.84204583356208
- type: euclidean_pearson
value: 68.43410821913582
- type: euclidean_spearman
value: 65.84203615293073
- type: manhattan_pearson
value: 68.31068072556376
- type: manhattan_spearman
value: 65.83052670300172
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 70.72278060206496
- type: cos_sim_spearman
value: 72.94488223638993
- type: euclidean_pearson
value: 72.87272723558824
- type: euclidean_spearman
value: 72.9448808909619
- type: manhattan_pearson
value: 73.14312374863987
- type: manhattan_spearman
value: 73.17094226040652
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 67.38872313741369
- type: cos_sim_spearman
value: 69.39591053377866
- type: euclidean_pearson
value: 69.51934754021094
- type: euclidean_spearman
value: 69.39674025878926
- type: manhattan_pearson
value: 69.45552921345616
- type: manhattan_spearman
value: 69.43073792027799
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.8928962240664
- type: cos_sim_spearman
value: 78.20100249603948
- type: euclidean_pearson
value: 78.32388609298962
- type: euclidean_spearman
value: 78.20188000341075
- type: manhattan_pearson
value: 78.4500539248116
- type: manhattan_spearman
value: 78.19642157133745
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 58.85262050940674
- type: cos_sim_spearman
value: 58.37965417152291
- type: euclidean_pearson
value: 59.76016227940433
- type: euclidean_spearman
value: 58.37965417152291
- type: manhattan_pearson
value: 60.2166257965911
- type: manhattan_spearman
value: 58.747276855442045
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 65.88908703880968
- type: cos_sim_spearman
value: 64.7638356299519
- type: euclidean_pearson
value: 66.43284083997051
- type: euclidean_spearman
value: 64.76388404493919
- type: manhattan_pearson
value: 66.54689278447367
- type: manhattan_spearman
value: 64.76609191059656
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 62.39526919052546
- type: mrr
value: 83.57624673801143
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 11.472
- type: map_at_10
value: 15.892000000000001
- type: map_at_100
value: 16.75
- type: map_at_1000
value: 16.898
- type: map_at_3
value: 14.167
- type: map_at_5
value: 15.0
- type: mrr_at_1
value: 12.667
- type: mrr_at_10
value: 17.065
- type: mrr_at_100
value: 17.899
- type: mrr_at_1000
value: 18.035999999999998
- type: mrr_at_3
value: 15.443999999999999
- type: mrr_at_5
value: 16.228
- type: ndcg_at_1
value: 12.667
- type: ndcg_at_10
value: 18.856
- type: ndcg_at_100
value: 23.555999999999997
- type: ndcg_at_1000
value: 28.138
- type: ndcg_at_3
value: 15.360999999999999
- type: ndcg_at_5
value: 16.712
- type: precision_at_1
value: 12.667
- type: precision_at_10
value: 3.033
- type: precision_at_100
value: 0.563
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 6.444
- type: precision_at_5
value: 4.6
- type: recall_at_1
value: 11.472
- type: recall_at_10
value: 27.278000000000002
- type: recall_at_100
value: 49.917
- type: recall_at_1000
value: 86.75
- type: recall_at_3
value: 17.416999999999998
- type: recall_at_5
value: 20.75
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.31683168316832
- type: cos_sim_ap
value: 61.13034379900418
- type: cos_sim_f1
value: 58.92957746478873
- type: cos_sim_precision
value: 67.48387096774194
- type: cos_sim_recall
value: 52.300000000000004
- type: dot_accuracy
value: 99.31683168316832
- type: dot_ap
value: 61.13034379900418
- type: dot_f1
value: 58.92957746478873
- type: dot_precision
value: 67.48387096774194
- type: dot_recall
value: 52.300000000000004
- type: euclidean_accuracy
value: 99.31683168316832
- type: euclidean_ap
value: 61.13034379900418
- type: euclidean_f1
value: 58.92957746478873
- type: euclidean_precision
value: 67.48387096774194
- type: euclidean_recall
value: 52.300000000000004
- type: manhattan_accuracy
value: 99.34554455445544
- type: manhattan_ap
value: 63.09142729872116
- type: manhattan_f1
value: 61.02425876010782
- type: manhattan_precision
value: 66.19883040935673
- type: manhattan_recall
value: 56.599999999999994
- type: max_accuracy
value: 99.34554455445544
- type: max_ap
value: 63.09142729872116
- type: max_f1
value: 61.02425876010782
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 31.859456190950397
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 29.22083488612398
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 38.763497690161216
- type: mrr
value: 38.9332134368899
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.037408929578664
- type: cos_sim_spearman
value: 29.62877340560356
- type: dot_pearson
value: 31.037408876961713
- type: dot_spearman
value: 29.578544636218147
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.66
- type: map_at_10
value: 4.183
- type: map_at_100
value: 5.748
- type: map_at_1000
value: 6.645
- type: map_at_3
value: 3.024
- type: map_at_5
value: 3.711
- type: mrr_at_1
value: 24.490000000000002
- type: mrr_at_10
value: 30.226
- type: mrr_at_100
value: 31.849
- type: mrr_at_1000
value: 31.915
- type: mrr_at_3
value: 27.211000000000002
- type: mrr_at_5
value: 29.048000000000002
- type: ndcg_at_1
value: 23.469
- type: ndcg_at_10
value: 12.527
- type: ndcg_at_100
value: 17.624000000000002
- type: ndcg_at_1000
value: 28.534
- type: ndcg_at_3
value: 18.118000000000002
- type: ndcg_at_5
value: 15.520999999999999
- type: precision_at_1
value: 24.490000000000002
- type: precision_at_10
value: 9.592
- type: precision_at_100
value: 3.653
- type: precision_at_1000
value: 1.006
- type: precision_at_3
value: 17.687
- type: precision_at_5
value: 14.285999999999998
- type: recall_at_1
value: 1.66
- type: recall_at_10
value: 6.419
- type: recall_at_100
value: 20.97
- type: recall_at_1000
value: 55.001
- type: recall_at_3
value: 3.37
- type: recall_at_5
value: 4.855
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 64.55300000000001
- type: ap
value: 11.51171190900715
- type: f1
value: 49.64107076870409
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 52.857951329937755
- type: f1
value: 52.984245378050296
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 25.391338056888934
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.24491863861239
- type: cos_sim_ap
value: 63.21977665263634
- type: cos_sim_f1
value: 60.90813587019961
- type: cos_sim_precision
value: 54.61586769939293
- type: cos_sim_recall
value: 68.83905013192611
- type: dot_accuracy
value: 83.24491863861239
- type: dot_ap
value: 63.21977665263634
- type: dot_f1
value: 60.90813587019961
- type: dot_precision
value: 54.61586769939293
- type: dot_recall
value: 68.83905013192611
- type: euclidean_accuracy
value: 83.24491863861239
- type: euclidean_ap
value: 63.21977665263634
- type: euclidean_f1
value: 60.90813587019961
- type: euclidean_precision
value: 54.61586769939293
- type: euclidean_recall
value: 68.83905013192611
- type: manhattan_accuracy
value: 83.05418131966383
- type: manhattan_ap
value: 62.73044800285885
- type: manhattan_f1
value: 60.47024246877296
- type: manhattan_precision
value: 56.42138939670932
- type: manhattan_recall
value: 65.14511873350924
- type: max_accuracy
value: 83.24491863861239
- type: max_ap
value: 63.21977665263634
- type: max_f1
value: 60.90813587019961
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 85.89086816470679
- type: cos_sim_ap
value: 78.81106183704443
- type: cos_sim_f1
value: 71.13646466143133
- type: cos_sim_precision
value: 68.54654483152484
- type: cos_sim_recall
value: 73.92978133661842
- type: dot_accuracy
value: 85.89086816470679
- type: dot_ap
value: 78.81106438949705
- type: dot_f1
value: 71.13646466143133
- type: dot_precision
value: 68.54654483152484
- type: dot_recall
value: 73.92978133661842
- type: euclidean_accuracy
value: 85.89086816470679
- type: euclidean_ap
value: 78.81106117828325
- type: euclidean_f1
value: 71.13646466143133
- type: euclidean_precision
value: 68.54654483152484
- type: euclidean_recall
value: 73.92978133661842
- type: manhattan_accuracy
value: 85.89474909768309
- type: manhattan_ap
value: 78.67476153897563
- type: manhattan_f1
value: 70.78659868900219
- type: manhattan_precision
value: 67.15726920950802
- type: manhattan_recall
value: 74.83061287342161
- type: max_accuracy
value: 85.89474909768309
- type: max_ap
value: 78.81106438949705
- type: max_f1
value: 71.13646466143133
---
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
medspaner/flair-clinical-trials-medic-attr | medspaner | null | [
"license:cc-by-nc-4.0",
"region:us"
] | 1,695 | 1,727 | 0 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: flair-clinical-trials-medic-attr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flair-clinical-trials-medic-attr
This named entity recognition model detects medication-related information:
- Contraindication: e.g. *contraindicación a **aspirina***
- Dose, strength or concentration: e.g. *14 mg*, *100.000 UI*
- Form: e.g. *tabletas*, *comprimidos*
- Route: e.g. *vía oral*, *i.v.*
The model achieves the following results on the test set (results are averaged over 5 evaluation rounds):
- Precision: 0.895 (±0.002)
- Recall: 0.740 (±0.015)
- F1: 0.810 (±0.009)
- Accuracy: 0.696
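For quick experimentation, the tagger can be loaded with the Flair library (version 0.12, as listed under Framework versions below). The snippet is a minimal, illustrative sketch: it assumes the checkpoint can be loaded directly from the Hugging Face Hub as a Flair `SequenceTagger`, and the example sentence is made up.
```python
# Minimal sketch: load the tagger from the Hugging Face Hub with Flair
# (assumes the checkpoint is published as a Flair SequenceTagger).
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("medspaner/flair-clinical-trials-medic-attr")

# Example sentence mentioning a dose and a route of administration
sentence = Sentence("Se administraron 100 mg de aspirina por vía oral.")
tagger.predict(sentence)

# Print the predicted medication-attribute spans with their labels and scores
for label in sentence.get_labels():
    print(label)
```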
## Model description
This model is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials using the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please cite it as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for general-purpose use, and may exhibit biases and/or other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please cite it as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- seed: we used different initializations for 5 evaluation rounds, and uploaded the model with the best results
- num_epochs: average 68.40 epochs (±7.47); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
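For illustration only, the sketch below shows how hyperparameters of this kind are typically passed to Flair's `ModelTrainer`; the corpus path, column format and embedding backbone are placeholders and do not describe the actual training setup.
```python
# Hypothetical sketch of a Flair training run using the hyperparameters above.
# Corpus location, column format and embedding backbone are placeholders.
from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

corpus = ColumnCorpus("data/ct_ebm_es_medic_attr", {0: "text", 1: "ner"})  # placeholder
label_dict = corpus.make_label_dictionary(label_type="ner")

embeddings = TransformerWordEmbeddings("bert-base-multilingual-cased")  # placeholder encoder
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
)

trainer = ModelTrainer(tagger, corpus)
trainer.train(
    "output/flair-clinical-trials-medic-attr",
    learning_rate=0.1,    # as reported above
    mini_batch_size=16,   # as reported above
    patience=5,           # learning rate is annealed after 5 epochs without improvement
)
```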
### Training results (test set; average and standard deviation of 5 rounds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.895 (±0.002) | 0.740 (±0.015) | 0.810 (±0.009) | 0.696 (±0.009) |
### Framework versions
- FLAIR 0.12
- Pytorch 1.10.2+cu116
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] | BioNLP |
SIRIS-Lab/AIObioEnts-AnatEM-pubmedbert-full | SIRIS-Lab | token-classification | [
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"biomedicine",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,731 | 1,734 | 45 | 0 | ---
base_model:
- microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
library_name: transformers
license: mit
pipeline_tag: token-classification
tags:
- ner
- biomedicine
---
# AIObioEnts: All-in-one biomedical entities
Biomedical named-entity recognition following the all-in-one NER (AIONER) scheme introduced by [Luo *et al.*](https://doi.org/10.1093/bioinformatics/btad310). This is a straightforward Hugging-Face-compatible implementation without using a decoding head for ease of integration with other pipelines.
**For full details, see the [main GitHub repository](https://github.com/sirisacademic/AIObioEnts/)**
## Anatomical biomedical entities
We have followed the original AIONER training pipeline, based on the BioRED dataset along with additional BioRED-compatible datasets for a set of core entities (Gene, Disease, Chemical, Species, Variant, Cell line), and fine-tuned it using a modified version of the latest release of the [AnatEM](https://nactem.ac.uk/anatomytagger/#AnatEM) corpus and a subset of entities that are of interest to us: *cell*, *cell component*, *tissue*, *multi-tissue structure*, and *organ*, along with the newly introduced *cancer*. This model corresponds to the implementation based on [BiomedBERT-base pre-trained on both abstracts from PubMed and full-text articles from PubMedCentral](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext).
**F1 scores**
The F1 scores on the test set of this modified dataset are shown below:
| | **BiomedBERT-base abstract+fulltext** |
| -------------------------- | :-----------------------------------: |
| **Cell** | 87.76 |
| **Cell component** | 81.74 |
| **Tissue** | 72.26 |
| **Cancer** | 89.29 |
| **Organ** | 84.18 |
| **Multi-tissue structure** | 72.65 |
|                            |                                       |
| **Overall** | 84.22 |
## Usage
The model can be used directly from the Hugging Face Hub in a NER pipeline. However, we note that:
- The model was trained on sentence-level data, and it works best when the input is split into sentences
- Each sentence to tag must be surrounded by the flag corresponding to the entity type one wishes to identify, as in: `<entity_type>sentence</entity_type>`. In the case of this fine-tuned model, the entity type should be `'ALL'`.
- Since additional `'O'` labels are used in the AIONER scheme, the outputs should be postprocessed before aggregating the tags
We provide helper functions to tag individual texts in the [main repository](https://github.com/sirisacademic/AIObioEnts/)
````python
from tagging_fn import process_one_text  # helper from the AIObioEnts GitHub repository
from transformers import pipeline

pipe = pipeline('ner', model='SIRIS-Lab/AIObioEnts-AnatEM-pubmedbert-full', aggregation_strategy='none', device=0)

text_to_tag = "The tumour cells had invaded the surrounding connective tissue."  # example input
process_one_text(text_to_tag, pipeline=pipe, entity_type='ALL')
````
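If you prefer not to pull in the repository's helper, the sketch below shows a rough equivalent of the wrapping step; the whitespace handling around the `<ALL>` flags and the filtering of the scheme's extra 'O'-style labels are simplifications, so refer to `tagging_fn` in the repository for the exact post-processing.
````python
# Rough, simplified sketch without the repository's helper functions.
from transformers import pipeline

pipe = pipeline(
    "ner",
    model="SIRIS-Lab/AIObioEnts-AnatEM-pubmedbert-full",
    aggregation_strategy="none",
)

sentence = "Fibroblasts were isolated from the lung tissue of each donor."
tagged_input = f"<ALL>{sentence}</ALL>"  # wrap the sentence with the 'ALL' entity flag

predictions = pipe(tagged_input)
# Drop the scheme's 'O'-style labels before any span aggregation
entities = [p for p in predictions if not p["entity"].startswith("O")]
for p in entities:
    print(p["word"], p["entity"], round(float(p["score"]), 3))
````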
## References
[[1] Ling Luo, Chih-Hsuan Wei, Po-Ting Lai, Robert Leaman, Qingyu Chen, and Zhiyong Lu. "AIONER: All-in-one scheme-based biomedical named entity recognition using deep learning." Bioinformatics, Volume 39, Issue 5, May 2023, btad310.](https://doi.org/10.1093/bioinformatics/btad310)
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"ANATEM",
"BIORED"
] | BioNLP |
gmonsoon/gemma2-9b-cpt-sahabatai-v1-instruct-GGUF | gmonsoon | null | [
"gguf",
"en",
"id",
"jv",
"su",
"arxiv:2309.06085",
"arxiv:2310.04928",
"arxiv:2311.07911",
"base_model:GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct",
"base_model:quantized:GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,731 | 1,731 | 2,514 | 5 | ---
base_model:
- GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
language:
- en
- id
- jv
- su
license: gemma
---
# Gemma2 9B CPT Sahabat-AI v1 Instruct
**Sahabat-AI** (Indonesian language for “close friends”) is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for Indonesian language and its various dialects. Sahabat-AI ecosystem is co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
Gemma2 9B CPT Sahabat-AI v1 Instruct is an Indonesian-focused model which has been fine-tuned with around **448,000 Indonesian instruction-completion pairs** alongside an Indonesian-dialect pool consisting of **96,000 instruction-completion pairs in Javanese** and **98,000 instruction-completion pairs in Sundanese**. Additionally, we added a pool of **129,000 instruction-completion pairs in English**.
- **Co-initiated by:** PT GoTo Gojek Tokopedia Tbk, Indosat Ooredoo Hutchison
- **Developed by:** PT GoTo Gojek Tokopedia Tbk, AI Singapore
- **Model type:** Decoder
- **Languages:** English, Indonesian, Javanese, Sundanese
- **License:** [Gemma Community License](https://ai.google.dev/gemma/terms)
## Model Details
### Model Description
We performed instruction tuning in Indonesian, Javanese, Sundanese as well as English on our [continued pre-trained Gemma2 9B CPT Sahabat-AI v1](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-base), a decoder model using the Gemma2 architecture, to create Gemma2 9B CPT Sahabat-AI v1 Instruct.
For tokenisation, the model employs the default tokenizer used in Gemma-2-9B. The model has a context length of 8192.
### Benchmark Performance
We evaluated Gemma2 9B CPT Sahabat-AI V1 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the
- [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
- These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
- We also added support for Javanese and Sundanese for the BHASA tasks whenever applicable
- [IndoMMLU](https://arxiv.org/pdf/2310.04928)
- These tasks include examination questions on Humanities, Indonesian language, Local languages and cultures, Social science and STEM across primary, middle, and high school levels.
- and the common English tasks from the [HuggingFace LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
- These tasks consist of [IFEval, BBH, Math Lvl 5, GPQA, MuSR, and MMLU-PRO.](https://huggingface.co/docs/leaderboards/open_llm_leaderboard/about)
- **Caveat**: Our results differ from the HuggingFace LLM Leaderboard because we have used [VLLM](https://docs.vllm.ai/en/latest/) as our inference platform. VLLM caps the context size at **4096 tokens** while HuggingFace was set to **8192 tokens**.
Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset.
#### Instruction-following Capabilities
Since Gemma2 9B CPT Sahabat-AI v1 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with the [IFEval](https://arxiv.org/abs/2311.07911) dataset.
As this dataset was in English, the linguists and native speakers in the team worked together to filter, localize and translate the dataset into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
*Note*: IFEval was only used on Bahasa Indonesia. We are currently working on adding it for Javanese and Sundanese for our upcoming releases.
#### Results
#### Indonesian Results
#### SEA HELM (also known as BHASA)
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Language / Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall (Bahasa Indonesia + Javanese + Sundanese)</td>
<td style="border: 1px solid gray; padding: 8px;">36.963</td>
<td style="border: 1px solid gray; padding: 8px;">42.988</td>
<td style="border: 1px solid gray; padding: 8px;">37.805</td>
<td style="border: 1px solid gray; padding: 8px;">45.866</td>
<td style="border: 1px solid gray; padding: 8px;">46.880</td>
<td style="border: 1px solid gray; padding: 8px;">56.359</td>
<td style="border: 1px solid gray; padding: 8px;">53.725</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">61.169</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Bahasa Indonesia</td>
<td style="border: 1px solid gray; padding: 8px;">46.760</td>
<td style="border: 1px solid gray; padding: 8px;">60.372</td>
<td style="border: 1px solid gray; padding: 8px;">42.022</td>
<td style="border: 1px solid gray; padding: 8px;">51.944</td>
<td style="border: 1px solid gray; padding: 8px;">54.579</td>
<td style="border: 1px solid gray; padding: 8px;">63.394</td>
<td style="border: 1px solid gray; padding: 8px;">57.221</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">64.154</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Javanese</td>
<td style="border: 1px solid gray; padding: 8px;">33.956</td>
<td style="border: 1px solid gray; padding: 8px;">40.625</td>
<td style="border: 1px solid gray; padding: 8px;">41.739</td>
<td style="border: 1px solid gray; padding: 8px;">47.587</td>
<td style="border: 1px solid gray; padding: 8px;">48.012</td>
<td style="border: 1px solid gray; padding: 8px;">56.468</td>
<td style="border: 1px solid gray; padding: 8px;">56.460</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">64.439</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Sundanese</td>
<td style="border: 1px solid gray; padding: 8px;">30.173</td>
<td style="border: 1px solid gray; padding: 8px;">27.969</td>
<td style="border: 1px solid gray; padding: 8px;">29.654</td>
<td style="border: 1px solid gray; padding: 8px;">38.068</td>
<td style="border: 1px solid gray; padding: 8px;">38.050</td>
<td style="border: 1px solid gray; padding: 8px;">49.216</td>
<td style="border: 1px solid gray; padding: 8px;">47.495</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">54.913</td>
</tr>
</table>
#### IndoMMLU
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Meta-Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall Results</td>
<td style="border: 1px solid gray; padding: 8px;">53.0%</td>
<td style="border: 1px solid gray; padding: 8px;">56.0%</td>
<td style="border: 1px solid gray; padding: 8px;">51.9%</td>
<td style="border: 1px solid gray; padding: 8px;">53.8%</td>
<td style="border: 1px solid gray; padding: 8px;">54.4%</td>
<td style="border: 1px solid gray; padding: 8px;">61.4%</td>
<td style="border: 1px solid gray; padding: 8px;">55.6%</td>
<td style="border: 2px solid black; padding: 8px; background-color: lightgreen;">62.6%</td>
</tr>
</table>
#### English Results
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px;">Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Average</td>
<td style="border: 1px solid gray; padding: 8px;">24.48</td>
<td style="border: 1px solid gray; padding: 8px;">27.75</td>
<td style="border: 1px solid gray; padding: 8px;">23.91</td>
<td style="border: 1px solid gray; padding: 8px;">27.98</td>
<td style="border: 1px solid gray; padding: 8px;">24.52</td>
<td style="border: 1px solid gray; padding: 8px;">26.44</td>
<td style="border: 1px solid gray; padding: 8px;">24.43</td>
<td style="border: 1px solid black; padding: 8px; background-color: lightgreen;">33.67</td>
</tr>
</table>
Gemma2 9B CPT Sahabat-AI v1 Instruct can be run using the 🤗 Transformers library
```python
# Please use transformers==4.45.0
import torch
import transformers
model_id = "GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
# Javanese
messages = [
{"role": "user", "content": "Sopo wae sing ana ing Punakawan?"}
]
outputs = pipeline(
messages,
max_new_tokens=256,
eos_token_id=terminators,
)
print(outputs[0]["generated_text"][-1])
# Sundanese
messages = [
{"role": "user", "content": "Kumaha caritana si Kabayan?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
eos_token_id=terminators,
)
print(outputs[0]["generated_text"][-1])
```
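A Bahasa Indonesia prompt follows the same pattern; the example below is illustrative (it is not taken from the original card) and reuses the `pipeline` and `terminators` objects defined in the snippet above.
```python
# Bahasa Indonesia (illustrative prompt; reuses the pipeline and terminators above)
messages = [
    {"role": "user", "content": "Jelaskan secara singkat apa itu gotong royong."},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
    eos_token_id=terminators,
)
print(outputs[0]["generated_text"][-1])
```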
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current Sahabat-AI models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Gemma2 9B CPT Sahabat-AI v1 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 4 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs.
## Data
Gemma2 9B CPT Sahabat-AI v1 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
## Call for Collaboration
Sahabat-AI (Indonesian language for “close friends”) is a **local open source Large Language Model (LLM) ecosystem in Indonesian language**, co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
Sahabat-AI ecosystem aims to empower Indonesians who want to develop AI-based services and applications using Bahasa Indonesia and its various local dialects.
We are supported by research centers and global tech experts, such as AI Singapore and Tech Mahindra, in training the model to gain general language understanding.
We also collaborate with top Indonesian universities, such as the University of Indonesia, Gadjah Mada University, Bogor Institute of Agriculture, and Bandung Institute of Technology, as well as leading Indonesian media groups, such as Kompas Gramedia Group and Republika, to train and enrich the model in Bahasa Indonesia, ensuring optimum provision of local context and cultural relevance.
We would like to invite **researchers, developers, and language enthusiasts** to actively contribute to the enhancement and expansion of Sahabat-AI.
Your collaborations can involve:
- Identifying and reporting technical issues
- Sharing pre-training, instruction, and preference data
- Improving documentation usability
- Proposing and implementing new model evaluation tasks and metrics
Join us in shaping the future of Sahabat-AI by sharing your expertise and insights to make these models more accessible, accurate, and versatile.
You can contribute your ideas through [this form.](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit)
## The Development Team (in ascending alphabetical order)
### AI Singapore
Chan Adwin<br>
Cheng Nicholas<br>
Choa Esther<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Limkonchotiwat Peerat<br>
Liu Bing Jie Darius<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teng Walter<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
### PT GoTo Gojek Tokopedia Tbk
Anissa Dininta<br>
Chau Shiau Ching<br>
Choiri Hendra Hadhil<br>
Goel Priyank<br>
Saini Ajay Kumar<br>
Shalev Ofir<br>
Tan Daryl<br>
Tep Kilian Rithi<br>
Tiwari Anupam<br>
Widjojo Daniel<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [Sahabat-AI Inquiry Form.](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit)
## Disclaimer
This is the repository for the Instruct model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## References
### IndoMMLU Reference
```bibtex
@inproceedings{koto-etal-2023-indommlu,
title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = December,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
}
``` | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] | Non_BioNLP |
twadada/GTE_wl_mv | twadada | null | [
"mteb",
"model-index",
"region:us"
] | 1,736 | 1,736 | 0 | 0 | ---
tags:
- mteb
model-index:
- name: gte_WORDLLAMA_MODEL2VEC_result
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.13432835820896
- type: ap
value: 35.167459200441506
- type: f1
value: 66.74544259725131
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 71.5158
- type: ap
value: 65.87290139797425
- type: f1
value: 71.31117308043078
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.032
- type: f1
value: 36.34554421029957
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 23.541999999999998
- type: map_at_10
value: 38.172
- type: map_at_100
value: 39.339
- type: map_at_1000
value: 39.353
- type: map_at_3
value: 33.286
- type: map_at_5
value: 35.942
- type: mrr_at_1
value: 24.253
- type: mrr_at_10
value: 38.423
- type: mrr_at_100
value: 39.589
- type: mrr_at_1000
value: 39.604
- type: mrr_at_3
value: 33.559
- type: mrr_at_5
value: 36.169000000000004
- type: ndcg_at_1
value: 23.541999999999998
- type: ndcg_at_10
value: 46.660000000000004
- type: ndcg_at_100
value: 51.800999999999995
- type: ndcg_at_1000
value: 52.147
- type: ndcg_at_3
value: 36.498000000000005
- type: ndcg_at_5
value: 41.309000000000005
- type: precision_at_1
value: 23.541999999999998
- type: precision_at_10
value: 7.396999999999999
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.268
- type: precision_at_5
value: 11.508000000000001
- type: recall_at_1
value: 23.541999999999998
- type: recall_at_10
value: 73.969
- type: recall_at_100
value: 96.871
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 45.804
- type: recall_at_5
value: 57.538999999999994
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 39.8392617925804
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 29.39147233524174
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 55.43457632808065
- type: mrr
value: 69.7011168271556
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.40924171268267
- type: cos_sim_spearman
value: 76.48728498335026
- type: euclidean_pearson
value: 78.11322656013188
- type: euclidean_spearman
value: 76.48728498335026
- type: manhattan_pearson
value: 78.39882365124392
- type: manhattan_spearman
value: 76.55837094044142
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 75.63311688311688
- type: f1
value: 74.89031278068427
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.47759744268641
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 26.72176842867392
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 21.918000000000003
- type: map_at_10
value: 29.912
- type: map_at_100
value: 31.205
- type: map_at_1000
value: 31.357000000000003
- type: map_at_3
value: 27.206000000000003
- type: map_at_5
value: 28.613
- type: mrr_at_1
value: 27.897
- type: mrr_at_10
value: 35.921
- type: mrr_at_100
value: 36.825
- type: mrr_at_1000
value: 36.894
- type: mrr_at_3
value: 33.858
- type: mrr_at_5
value: 34.881
- type: ndcg_at_1
value: 27.897
- type: ndcg_at_10
value: 35.306
- type: ndcg_at_100
value: 40.955999999999996
- type: ndcg_at_1000
value: 43.909
- type: ndcg_at_3
value: 31.422
- type: ndcg_at_5
value: 32.89
- type: precision_at_1
value: 27.897
- type: precision_at_10
value: 6.9239999999999995
- type: precision_at_100
value: 1.233
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 15.451
- type: precision_at_5
value: 11.044
- type: recall_at_1
value: 21.918000000000003
- type: recall_at_10
value: 45.171
- type: recall_at_100
value: 70.226
- type: recall_at_1000
value: 90.279
- type: recall_at_3
value: 32.657000000000004
- type: recall_at_5
value: 37.372
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 20.456
- type: map_at_10
value: 26.596999999999998
- type: map_at_100
value: 27.639999999999997
- type: map_at_1000
value: 27.766000000000002
- type: map_at_3
value: 24.487000000000002
- type: map_at_5
value: 25.683
- type: mrr_at_1
value: 25.605
- type: mrr_at_10
value: 31.326999999999998
- type: mrr_at_100
value: 32.133
- type: mrr_at_1000
value: 32.198
- type: mrr_at_3
value: 29.310000000000002
- type: mrr_at_5
value: 30.431
- type: ndcg_at_1
value: 25.605
- type: ndcg_at_10
value: 30.728
- type: ndcg_at_100
value: 35.318
- type: ndcg_at_1000
value: 38.082
- type: ndcg_at_3
value: 27.226
- type: ndcg_at_5
value: 28.828
- type: precision_at_1
value: 25.605
- type: precision_at_10
value: 5.561
- type: precision_at_100
value: 1.001
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 12.717999999999998
- type: precision_at_5
value: 9.134
- type: recall_at_1
value: 20.456
- type: recall_at_10
value: 38.476
- type: recall_at_100
value: 58.120000000000005
- type: recall_at_1000
value: 76.793
- type: recall_at_3
value: 28.232000000000003
- type: recall_at_5
value: 32.53
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 28.088
- type: map_at_10
value: 37.584
- type: map_at_100
value: 38.75
- type: map_at_1000
value: 38.842999999999996
- type: map_at_3
value: 34.839999999999996
- type: map_at_5
value: 36.352000000000004
- type: mrr_at_1
value: 32.476
- type: mrr_at_10
value: 40.892
- type: mrr_at_100
value: 41.792
- type: mrr_at_1000
value: 41.845
- type: mrr_at_3
value: 38.474000000000004
- type: mrr_at_5
value: 39.818999999999996
- type: ndcg_at_1
value: 32.476
- type: ndcg_at_10
value: 42.811
- type: ndcg_at_100
value: 48.045
- type: ndcg_at_1000
value: 50.09400000000001
- type: ndcg_at_3
value: 37.830000000000005
- type: ndcg_at_5
value: 40.168
- type: precision_at_1
value: 32.476
- type: precision_at_10
value: 7.034
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 16.949
- type: precision_at_5
value: 11.799
- type: recall_at_1
value: 28.088
- type: recall_at_10
value: 55.318
- type: recall_at_100
value: 78.66499999999999
- type: recall_at_1000
value: 93.415
- type: recall_at_3
value: 41.865
- type: recall_at_5
value: 47.675
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 13.13
- type: map_at_10
value: 18.506
- type: map_at_100
value: 19.405
- type: map_at_1000
value: 19.516
- type: map_at_3
value: 16.821
- type: map_at_5
value: 17.782
- type: mrr_at_1
value: 14.124
- type: mrr_at_10
value: 19.767000000000003
- type: mrr_at_100
value: 20.66
- type: mrr_at_1000
value: 20.755000000000003
- type: mrr_at_3
value: 18.023
- type: mrr_at_5
value: 19.0
- type: ndcg_at_1
value: 14.124
- type: ndcg_at_10
value: 21.728
- type: ndcg_at_100
value: 26.422
- type: ndcg_at_1000
value: 29.73
- type: ndcg_at_3
value: 18.312
- type: ndcg_at_5
value: 19.993
- type: precision_at_1
value: 14.124
- type: precision_at_10
value: 3.4459999999999997
- type: precision_at_100
value: 0.617
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 7.91
- type: precision_at_5
value: 5.695
- type: recall_at_1
value: 13.13
- type: recall_at_10
value: 30.470000000000002
- type: recall_at_100
value: 52.449
- type: recall_at_1000
value: 78.25
- type: recall_at_3
value: 21.209
- type: recall_at_5
value: 25.281
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 7.7
- type: map_at_10
value: 12.333
- type: map_at_100
value: 13.367999999999999
- type: map_at_1000
value: 13.492
- type: map_at_3
value: 10.747
- type: map_at_5
value: 11.645999999999999
- type: mrr_at_1
value: 9.826
- type: mrr_at_10
value: 14.81
- type: mrr_at_100
value: 15.854
- type: mrr_at_1000
value: 15.953000000000001
- type: mrr_at_3
value: 13.039000000000001
- type: mrr_at_5
value: 14.046
- type: ndcg_at_1
value: 9.826
- type: ndcg_at_10
value: 15.437000000000001
- type: ndcg_at_100
value: 21.009
- type: ndcg_at_1000
value: 24.515
- type: ndcg_at_3
value: 12.349
- type: ndcg_at_5
value: 13.850000000000001
- type: precision_at_1
value: 9.826
- type: precision_at_10
value: 3.01
- type: precision_at_100
value: 0.692
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 6.053
- type: precision_at_5
value: 4.577
- type: recall_at_1
value: 7.7
- type: recall_at_10
value: 22.546
- type: recall_at_100
value: 47.648
- type: recall_at_1000
value: 73.655
- type: recall_at_3
value: 14.289
- type: recall_at_5
value: 17.994
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 19.886
- type: map_at_10
value: 26.63
- type: map_at_100
value: 27.944999999999997
- type: map_at_1000
value: 28.097
- type: map_at_3
value: 24.077
- type: map_at_5
value: 25.378
- type: mrr_at_1
value: 24.254
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.425
- type: mrr_at_1000
value: 32.501999999999995
- type: mrr_at_3
value: 28.793999999999997
- type: mrr_at_5
value: 30.237000000000002
- type: ndcg_at_1
value: 24.254
- type: ndcg_at_10
value: 31.524
- type: ndcg_at_100
value: 37.658
- type: ndcg_at_1000
value: 40.722
- type: ndcg_at_3
value: 26.953
- type: ndcg_at_5
value: 28.919
- type: precision_at_1
value: 24.254
- type: precision_at_10
value: 5.881
- type: precision_at_100
value: 1.072
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 12.479999999999999
- type: precision_at_5
value: 9.105
- type: recall_at_1
value: 19.886
- type: recall_at_10
value: 41.593
- type: recall_at_100
value: 68.43599999999999
- type: recall_at_1000
value: 89.041
- type: recall_at_3
value: 28.723
- type: recall_at_5
value: 33.804
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 15.821
- type: map_at_10
value: 21.898999999999997
- type: map_at_100
value: 23.189
- type: map_at_1000
value: 23.323
- type: map_at_3
value: 19.634999999999998
- type: map_at_5
value: 20.848
- type: mrr_at_1
value: 19.064
- type: mrr_at_10
value: 25.784000000000002
- type: mrr_at_100
value: 26.828999999999997
- type: mrr_at_1000
value: 26.904
- type: mrr_at_3
value: 23.573
- type: mrr_at_5
value: 24.812
- type: ndcg_at_1
value: 19.064
- type: ndcg_at_10
value: 26.229999999999997
- type: ndcg_at_100
value: 32.326
- type: ndcg_at_1000
value: 35.435
- type: ndcg_at_3
value: 22.070999999999998
- type: ndcg_at_5
value: 23.93
- type: precision_at_1
value: 19.064
- type: precision_at_10
value: 4.966
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 10.54
- type: precision_at_5
value: 7.785
- type: recall_at_1
value: 15.821
- type: recall_at_10
value: 35.516
- type: recall_at_100
value: 61.971
- type: recall_at_1000
value: 83.848
- type: recall_at_3
value: 23.97
- type: recall_at_5
value: 28.662
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 15.921916666666666
- type: map_at_10
value: 21.780166666666666
- type: map_at_100
value: 22.84433333333333
- type: map_at_1000
value: 22.975916666666667
- type: map_at_3
value: 19.735916666666665
- type: map_at_5
value: 20.860416666666666
- type: mrr_at_1
value: 19.054249999999996
- type: mrr_at_10
value: 25.021333333333335
- type: mrr_at_100
value: 25.93491666666667
- type: mrr_at_1000
value: 26.019166666666667
- type: mrr_at_3
value: 23.03583333333333
- type: mrr_at_5
value: 24.140000000000004
- type: ndcg_at_1
value: 19.054249999999996
- type: ndcg_at_10
value: 25.70233333333334
- type: ndcg_at_100
value: 30.890500000000003
- type: ndcg_at_1000
value: 34.02575
- type: ndcg_at_3
value: 22.017666666666663
- type: ndcg_at_5
value: 23.718666666666664
- type: precision_at_1
value: 19.054249999999996
- type: precision_at_10
value: 4.622083333333333
- type: precision_at_100
value: 0.86825
- type: precision_at_1000
value: 0.13258333333333333
- type: precision_at_3
value: 10.176166666666669
- type: precision_at_5
value: 7.382749999999999
- type: recall_at_1
value: 15.921916666666666
- type: recall_at_10
value: 34.314833333333326
- type: recall_at_100
value: 57.83341666666667
- type: recall_at_1000
value: 80.45625000000001
- type: recall_at_3
value: 23.967166666666667
- type: recall_at_5
value: 28.36841666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 12.857
- type: map_at_10
value: 17.826
- type: map_at_100
value: 18.677
- type: map_at_1000
value: 18.775
- type: map_at_3
value: 16.227
- type: map_at_5
value: 17.168
- type: mrr_at_1
value: 14.877
- type: mrr_at_10
value: 19.784
- type: mrr_at_100
value: 20.662
- type: mrr_at_1000
value: 20.746000000000002
- type: mrr_at_3
value: 18.175
- type: mrr_at_5
value: 19.08
- type: ndcg_at_1
value: 14.877
- type: ndcg_at_10
value: 20.987000000000002
- type: ndcg_at_100
value: 25.654
- type: ndcg_at_1000
value: 28.360000000000003
- type: ndcg_at_3
value: 17.919
- type: ndcg_at_5
value: 19.404
- type: precision_at_1
value: 14.877
- type: precision_at_10
value: 3.528
- type: precision_at_100
value: 0.641
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 8.129
- type: precision_at_5
value: 5.798
- type: recall_at_1
value: 12.857
- type: recall_at_10
value: 28.864
- type: recall_at_100
value: 50.943000000000005
- type: recall_at_1000
value: 71.158
- type: recall_at_3
value: 20.330000000000002
- type: recall_at_5
value: 24.03
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 8.823
- type: map_at_10
value: 12.664
- type: map_at_100
value: 13.447000000000001
- type: map_at_1000
value: 13.58
- type: map_at_3
value: 11.372
- type: map_at_5
value: 12.052
- type: mrr_at_1
value: 10.84
- type: mrr_at_10
value: 15.135000000000002
- type: mrr_at_100
value: 15.919
- type: mrr_at_1000
value: 16.026
- type: mrr_at_3
value: 13.702
- type: mrr_at_5
value: 14.496
- type: ndcg_at_1
value: 10.84
- type: ndcg_at_10
value: 15.375
- type: ndcg_at_100
value: 19.612
- type: ndcg_at_1000
value: 23.305
- type: ndcg_at_3
value: 12.879999999999999
- type: ndcg_at_5
value: 13.980999999999998
- type: precision_at_1
value: 10.84
- type: precision_at_10
value: 2.887
- type: precision_at_100
value: 0.599
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 6.171
- type: precision_at_5
value: 4.522
- type: recall_at_1
value: 8.823
- type: recall_at_10
value: 21.19
- type: recall_at_100
value: 40.843
- type: recall_at_1000
value: 68.118
- type: recall_at_3
value: 14.219000000000001
- type: recall_at_5
value: 17.061
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 14.841999999999999
- type: map_at_10
value: 19.807
- type: map_at_100
value: 20.646
- type: map_at_1000
value: 20.782
- type: map_at_3
value: 17.881
- type: map_at_5
value: 18.94
- type: mrr_at_1
value: 17.631
- type: mrr_at_10
value: 22.949
- type: mrr_at_100
value: 23.727
- type: mrr_at_1000
value: 23.829
- type: mrr_at_3
value: 20.896
- type: mrr_at_5
value: 21.964
- type: ndcg_at_1
value: 17.631
- type: ndcg_at_10
value: 23.544999999999998
- type: ndcg_at_100
value: 28.042
- type: ndcg_at_1000
value: 31.66
- type: ndcg_at_3
value: 19.697
- type: ndcg_at_5
value: 21.467
- type: precision_at_1
value: 17.631
- type: precision_at_10
value: 4.039000000000001
- type: precision_at_100
value: 0.7080000000000001
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 8.831
- type: precision_at_5
value: 6.381
- type: recall_at_1
value: 14.841999999999999
- type: recall_at_10
value: 32.144
- type: recall_at_100
value: 52.896
- type: recall_at_1000
value: 79.3
- type: recall_at_3
value: 21.64
- type: recall_at_5
value: 26.127
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 15.182
- type: map_at_10
value: 21.423000000000002
- type: map_at_100
value: 22.766000000000002
- type: map_at_1000
value: 22.966
- type: map_at_3
value: 19.096
- type: map_at_5
value: 20.514
- type: mrr_at_1
value: 18.379
- type: mrr_at_10
value: 24.834999999999997
- type: mrr_at_100
value: 25.818
- type: mrr_at_1000
value: 25.893
- type: mrr_at_3
value: 22.628
- type: mrr_at_5
value: 24.032
- type: ndcg_at_1
value: 18.379
- type: ndcg_at_10
value: 25.766
- type: ndcg_at_100
value: 31.677
- type: ndcg_at_1000
value: 35.024
- type: ndcg_at_3
value: 22.027
- type: ndcg_at_5
value: 24.046
- type: precision_at_1
value: 18.379
- type: precision_at_10
value: 5.158
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 10.474
- type: precision_at_5
value: 7.983999999999999
- type: recall_at_1
value: 15.182
- type: recall_at_10
value: 34.008
- type: recall_at_100
value: 61.882000000000005
- type: recall_at_1000
value: 84.635
- type: recall_at_3
value: 23.3
- type: recall_at_5
value: 28.732999999999997
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 12.36
- type: map_at_10
value: 16.181
- type: map_at_100
value: 17.094
- type: map_at_1000
value: 17.214
- type: map_at_3
value: 14.442
- type: map_at_5
value: 15.348999999999998
- type: mrr_at_1
value: 13.678
- type: mrr_at_10
value: 17.636
- type: mrr_at_100
value: 18.575
- type: mrr_at_1000
value: 18.685
- type: mrr_at_3
value: 15.958
- type: mrr_at_5
value: 16.882
- type: ndcg_at_1
value: 13.678
- type: ndcg_at_10
value: 18.991
- type: ndcg_at_100
value: 23.967
- type: ndcg_at_1000
value: 27.473
- type: ndcg_at_3
value: 15.526000000000002
- type: ndcg_at_5
value: 17.148
- type: precision_at_1
value: 13.678
- type: precision_at_10
value: 3.031
- type: precision_at_100
value: 0.597
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 6.4079999999999995
- type: precision_at_5
value: 4.769
- type: recall_at_1
value: 12.36
- type: recall_at_10
value: 26.482
- type: recall_at_100
value: 49.922
- type: recall_at_1000
value: 76.983
- type: recall_at_3
value: 17.172
- type: recall_at_5
value: 21.152
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 8.464
- type: map_at_10
value: 14.78
- type: map_at_100
value: 16.436999999999998
- type: map_at_1000
value: 16.650000000000002
- type: map_at_3
value: 12.027000000000001
- type: map_at_5
value: 13.428999999999998
- type: mrr_at_1
value: 19.544
- type: mrr_at_10
value: 29.537999999999997
- type: mrr_at_100
value: 30.653000000000002
- type: mrr_at_1000
value: 30.708000000000002
- type: mrr_at_3
value: 25.798
- type: mrr_at_5
value: 28.072000000000003
- type: ndcg_at_1
value: 19.544
- type: ndcg_at_10
value: 21.953
- type: ndcg_at_100
value: 29.188
- type: ndcg_at_1000
value: 33.222
- type: ndcg_at_3
value: 16.89
- type: ndcg_at_5
value: 18.825
- type: precision_at_1
value: 19.544
- type: precision_at_10
value: 7.277
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 12.834000000000001
- type: precision_at_5
value: 10.488999999999999
- type: recall_at_1
value: 8.464
- type: recall_at_10
value: 27.762999999999998
- type: recall_at_100
value: 53.147999999999996
- type: recall_at_1000
value: 76.183
- type: recall_at_3
value: 15.642
- type: recall_at_5
value: 20.593
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 5.676
- type: map_at_10
value: 11.847000000000001
- type: map_at_100
value: 16.875999999999998
- type: map_at_1000
value: 18.081
- type: map_at_3
value: 8.512
- type: map_at_5
value: 9.956
- type: mrr_at_1
value: 48.0
- type: mrr_at_10
value: 57.928000000000004
- type: mrr_at_100
value: 58.52
- type: mrr_at_1000
value: 58.544
- type: mrr_at_3
value: 55.333
- type: mrr_at_5
value: 56.958
- type: ndcg_at_1
value: 35.875
- type: ndcg_at_10
value: 27.221
- type: ndcg_at_100
value: 31.808999999999997
- type: ndcg_at_1000
value: 39.199
- type: ndcg_at_3
value: 30.274
- type: ndcg_at_5
value: 28.785
- type: precision_at_1
value: 48.0
- type: precision_at_10
value: 23.65
- type: precision_at_100
value: 7.818
- type: precision_at_1000
value: 1.651
- type: precision_at_3
value: 35.833
- type: precision_at_5
value: 31.0
- type: recall_at_1
value: 5.676
- type: recall_at_10
value: 16.619
- type: recall_at_100
value: 39.422000000000004
- type: recall_at_1000
value: 64.095
- type: recall_at_3
value: 9.608
- type: recall_at_5
value: 12.277000000000001
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.185
- type: f1
value: 44.87033813298503
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 18.904
- type: map_at_10
value: 28.435
- type: map_at_100
value: 29.498
- type: map_at_1000
value: 29.567
- type: map_at_3
value: 25.319000000000003
- type: map_at_5
value: 27.13
- type: mrr_at_1
value: 20.116999999999997
- type: mrr_at_10
value: 30.112
- type: mrr_at_100
value: 31.155
- type: mrr_at_1000
value: 31.213
- type: mrr_at_3
value: 26.895000000000003
- type: mrr_at_5
value: 28.793000000000003
- type: ndcg_at_1
value: 20.116999999999997
- type: ndcg_at_10
value: 34.244
- type: ndcg_at_100
value: 39.409
- type: ndcg_at_1000
value: 41.195
- type: ndcg_at_3
value: 27.872000000000003
- type: ndcg_at_5
value: 31.128
- type: precision_at_1
value: 20.116999999999997
- type: precision_at_10
value: 5.534
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 12.076
- type: precision_at_5
value: 8.965
- type: recall_at_1
value: 18.904
- type: recall_at_10
value: 50.858000000000004
- type: recall_at_100
value: 74.42
- type: recall_at_1000
value: 88.023
- type: recall_at_3
value: 33.675
- type: recall_at_5
value: 41.449999999999996
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 8.892
- type: map_at_10
value: 14.363000000000001
- type: map_at_100
value: 15.75
- type: map_at_1000
value: 15.959000000000001
- type: map_at_3
value: 12.25
- type: map_at_5
value: 13.286999999999999
- type: mrr_at_1
value: 16.821
- type: mrr_at_10
value: 23.425
- type: mrr_at_100
value: 24.556
- type: mrr_at_1000
value: 24.637
- type: mrr_at_3
value: 20.885
- type: mrr_at_5
value: 22.127
- type: ndcg_at_1
value: 16.821
- type: ndcg_at_10
value: 19.412
- type: ndcg_at_100
value: 25.836
- type: ndcg_at_1000
value: 30.131000000000004
- type: ndcg_at_3
value: 16.198
- type: ndcg_at_5
value: 17.185
- type: precision_at_1
value: 16.821
- type: precision_at_10
value: 5.556
- type: precision_at_100
value: 1.1820000000000002
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 10.545
- type: precision_at_5
value: 8.056000000000001
- type: recall_at_1
value: 8.892
- type: recall_at_10
value: 25.249
- type: recall_at_100
value: 50.263000000000005
- type: recall_at_1000
value: 76.43299999999999
- type: recall_at_3
value: 15.094
- type: recall_at_5
value: 18.673000000000002
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 20.831
- type: map_at_10
value: 29.959999999999997
- type: map_at_100
value: 30.959999999999997
- type: map_at_1000
value: 31.069000000000003
- type: map_at_3
value: 27.453
- type: map_at_5
value: 28.838
- type: mrr_at_1
value: 41.661
- type: mrr_at_10
value: 49.647999999999996
- type: mrr_at_100
value: 50.304
- type: mrr_at_1000
value: 50.352
- type: mrr_at_3
value: 47.403
- type: mrr_at_5
value: 48.657000000000004
- type: ndcg_at_1
value: 41.661
- type: ndcg_at_10
value: 37.854
- type: ndcg_at_100
value: 42.248999999999995
- type: ndcg_at_1000
value: 44.756
- type: ndcg_at_3
value: 33.243
- type: ndcg_at_5
value: 35.467
- type: precision_at_1
value: 41.661
- type: precision_at_10
value: 8.386000000000001
- type: precision_at_100
value: 1.1900000000000002
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 21.022
- type: precision_at_5
value: 14.377
- type: recall_at_1
value: 20.831
- type: recall_at_10
value: 41.931000000000004
- type: recall_at_100
value: 59.507
- type: recall_at_1000
value: 76.232
- type: recall_at_3
value: 31.533
- type: recall_at_5
value: 35.942
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 70.2136
- type: ap
value: 64.38274263735502
- type: f1
value: 70.02577813394484
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 7.542999999999999
- type: map_at_10
value: 13.229
- type: map_at_100
value: 14.283999999999999
- type: map_at_1000
value: 14.396
- type: map_at_3
value: 11.139000000000001
- type: map_at_5
value: 12.259
- type: mrr_at_1
value: 7.808
- type: mrr_at_10
value: 13.577
- type: mrr_at_100
value: 14.625
- type: mrr_at_1000
value: 14.732000000000001
- type: mrr_at_3
value: 11.464
- type: mrr_at_5
value: 12.584999999999999
- type: ndcg_at_1
value: 7.779
- type: ndcg_at_10
value: 16.793
- type: ndcg_at_100
value: 22.564
- type: ndcg_at_1000
value: 25.799
- type: ndcg_at_3
value: 12.431000000000001
- type: ndcg_at_5
value: 14.442
- type: precision_at_1
value: 7.779
- type: precision_at_10
value: 2.894
- type: precision_at_100
value: 0.59
- type: precision_at_1000
value: 0.087
- type: precision_at_3
value: 5.454
- type: precision_at_5
value: 4.278
- type: recall_at_1
value: 7.542999999999999
- type: recall_at_10
value: 27.907
- type: recall_at_100
value: 56.13399999999999
- type: recall_at_1000
value: 81.877
- type: recall_at_3
value: 15.878999999999998
- type: recall_at_5
value: 20.726
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.68490652074783
- type: f1
value: 90.90009716586837
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 61.33150934792522
- type: f1
value: 42.414995407585955
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.29455279085406
- type: f1
value: 64.0154454215856
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.91055817081372
- type: f1
value: 72.79505573377739
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.478611587568
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.395691978780366
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.75504868917307
- type: mrr
value: 31.723412508217553
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 4.739
- type: map_at_10
value: 9.419
- type: map_at_100
value: 12.209
- type: map_at_1000
value: 13.653
- type: map_at_3
value: 7.292999999999999
- type: map_at_5
value: 8.291
- type: mrr_at_1
value: 38.7
- type: mrr_at_10
value: 47.934
- type: mrr_at_100
value: 48.605
- type: mrr_at_1000
value: 48.646
- type: mrr_at_3
value: 45.717
- type: mrr_at_5
value: 47.157
- type: ndcg_at_1
value: 36.842000000000006
- type: ndcg_at_10
value: 28.077
- type: ndcg_at_100
value: 26.83
- type: ndcg_at_1000
value: 36.272
- type: ndcg_at_3
value: 32.429
- type: ndcg_at_5
value: 30.823
- type: precision_at_1
value: 38.7
- type: precision_at_10
value: 20.774
- type: precision_at_100
value: 7.331
- type: precision_at_1000
value: 2.085
- type: precision_at_3
value: 30.341
- type: precision_at_5
value: 26.502
- type: recall_at_1
value: 4.739
- type: recall_at_10
value: 13.065999999999999
- type: recall_at_100
value: 28.875
- type: recall_at_1000
value: 62.751000000000005
- type: recall_at_3
value: 8.338
- type: recall_at_5
value: 10.211
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 10.764
- type: map_at_10
value: 18.582
- type: map_at_100
value: 19.953000000000003
- type: map_at_1000
value: 20.049
- type: map_at_3
value: 15.551
- type: map_at_5
value: 17.143
- type: mrr_at_1
value: 12.283
- type: mrr_at_10
value: 20.507
- type: mrr_at_100
value: 21.724
- type: mrr_at_1000
value: 21.801000000000002
- type: mrr_at_3
value: 17.434
- type: mrr_at_5
value: 19.097
- type: ndcg_at_1
value: 12.254
- type: ndcg_at_10
value: 23.818
- type: ndcg_at_100
value: 30.652
- type: ndcg_at_1000
value: 33.25
- type: ndcg_at_3
value: 17.577
- type: ndcg_at_5
value: 20.43
- type: precision_at_1
value: 12.254
- type: precision_at_10
value: 4.492999999999999
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 8.333
- type: precision_at_5
value: 6.593
- type: recall_at_1
value: 10.764
- type: recall_at_10
value: 38.279999999999994
- type: recall_at_100
value: 69.77600000000001
- type: recall_at_1000
value: 89.75
- type: recall_at_3
value: 21.608
- type: recall_at_5
value: 28.247
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.238
- type: map_at_10
value: 79.61
- type: map_at_100
value: 80.339
- type: map_at_1000
value: 80.366
- type: map_at_3
value: 76.572
- type: map_at_5
value: 78.45100000000001
- type: mrr_at_1
value: 76.18
- type: mrr_at_10
value: 83.319
- type: mrr_at_100
value: 83.492
- type: mrr_at_1000
value: 83.49499999999999
- type: mrr_at_3
value: 82.002
- type: mrr_at_5
value: 82.88
- type: ndcg_at_1
value: 76.24
- type: ndcg_at_10
value: 84.048
- type: ndcg_at_100
value: 85.76700000000001
- type: ndcg_at_1000
value: 85.989
- type: ndcg_at_3
value: 80.608
- type: ndcg_at_5
value: 82.45
- type: precision_at_1
value: 76.24
- type: precision_at_10
value: 12.775
- type: precision_at_100
value: 1.498
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.107
- type: precision_at_5
value: 23.198
- type: recall_at_1
value: 66.238
- type: recall_at_10
value: 92.655
- type: recall_at_100
value: 98.79599999999999
- type: recall_at_1000
value: 99.914
- type: recall_at_3
value: 82.818
- type: recall_at_5
value: 87.985
- type: map_at_1
value: 3.3029999999999995
- type: map_at_10
value: 8.534
- type: map_at_100
value: 10.269
- type: map_at_1000
value: 10.569
- type: map_at_3
value: 6.02
- type: map_at_5
value: 7.3
- type: mrr_at_1
value: 16.2
- type: mrr_at_10
value: 26.048
- type: mrr_at_100
value: 27.229
- type: mrr_at_1000
value: 27.307
- type: mrr_at_3
value: 22.8
- type: mrr_at_5
value: 24.555
- type: ndcg_at_1
value: 16.2
- type: ndcg_at_10
value: 15.152
- type: ndcg_at_100
value: 22.692999999999998
- type: ndcg_at_1000
value: 28.283
- type: ndcg_at_3
value: 13.831
- type: ndcg_at_5
value: 12.383
- type: precision_at_1
value: 16.2
- type: precision_at_10
value: 8.15
- type: precision_at_100
value: 1.921
- type: precision_at_1000
value: 0.326
- type: precision_at_3
value: 13.167000000000002
- type: precision_at_5
value: 11.200000000000001
- type: recall_at_1
value: 3.3029999999999995
- type: recall_at_10
value: 16.463
- type: recall_at_100
value: 38.968
- type: recall_at_1000
value: 66.208
- type: recall_at_3
value: 8.023
- type: recall_at_5
value: 11.338
- type: map_at_1
value: 0.154
- type: map_at_10
value: 1.216
- type: map_at_100
value: 6.401
- type: map_at_1000
value: 16.882
- type: map_at_3
value: 0.418
- type: map_at_5
value: 0.7040000000000001
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 75.319
- type: mrr_at_100
value: 75.435
- type: mrr_at_1000
value: 75.435
- type: mrr_at_3
value: 73.333
- type: mrr_at_5
value: 75.033
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 54.176
- type: ndcg_at_100
value: 40.741
- type: ndcg_at_1000
value: 38.385000000000005
- type: ndcg_at_3
value: 57.676
- type: ndcg_at_5
value: 57.867000000000004
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 57.8
- type: precision_at_100
value: 42.68
- type: precision_at_1000
value: 18.478
- type: precision_at_3
value: 61.333000000000006
- type: precision_at_5
value: 63.6
- type: recall_at_1
value: 0.154
- type: recall_at_10
value: 1.468
- type: recall_at_100
value: 9.541
- type: recall_at_1000
value: 37.218
- type: recall_at_3
value: 0.46299999999999997
- type: recall_at_5
value: 0.8340000000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.96790773164943
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 51.114201492992976
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 78.21858054391086
- type: cos_sim_spearman
value: 67.3365618536054
- type: euclidean_pearson
value: 72.40963340986721
- type: euclidean_spearman
value: 67.336666949735
- type: manhattan_pearson
value: 72.14690674984998
- type: manhattan_spearman
value: 67.32922820760339
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 76.49003508454533
- type: cos_sim_spearman
value: 66.84152843358724
- type: euclidean_pearson
value: 72.00905568823764
- type: euclidean_spearman
value: 66.8427445518875
- type: manhattan_pearson
value: 71.33279968302561
- type: manhattan_spearman
value: 66.63248621937453
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 78.26330596241046
- type: cos_sim_spearman
value: 78.99008985666835
- type: euclidean_pearson
value: 78.51141445278363
- type: euclidean_spearman
value: 78.99010203692151
- type: manhattan_pearson
value: 78.06877144241578
- type: manhattan_spearman
value: 78.49232451344044
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.14106714330973
- type: cos_sim_spearman
value: 74.82820560037015
- type: euclidean_pearson
value: 77.62758758774916
- type: euclidean_spearman
value: 74.82819590900333
- type: manhattan_pearson
value: 77.48877257108047
- type: manhattan_spearman
value: 74.74789870583966
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 82.48914773660643
- type: cos_sim_spearman
value: 83.00065347429336
- type: euclidean_pearson
value: 82.64658342996727
- type: euclidean_spearman
value: 83.00065194339217
- type: manhattan_pearson
value: 82.55463149184536
- type: manhattan_spearman
value: 82.8911825343332
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 77.784876359328
- type: cos_sim_spearman
value: 78.360543979936
- type: euclidean_pearson
value: 77.73937696752135
- type: euclidean_spearman
value: 78.36053665222538
- type: manhattan_pearson
value: 77.56126269274264
- type: manhattan_spearman
value: 78.18717393504727
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.63171981287952
- type: cos_sim_spearman
value: 87.49687143000429
- type: euclidean_pearson
value: 86.37853734517222
- type: euclidean_spearman
value: 87.4977435828658
- type: manhattan_pearson
value: 86.40342805532555
- type: manhattan_spearman
value: 87.57812091712806
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 60.00736823914696
- type: cos_sim_spearman
value: 60.59580774316736
- type: euclidean_pearson
value: 61.893600849213094
- type: euclidean_spearman
value: 60.59580774316736
- type: manhattan_pearson
value: 61.43013801720455
- type: manhattan_spearman
value: 59.92526461879062
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 80.58292387813594
- type: cos_sim_spearman
value: 78.85975762418589
- type: euclidean_pearson
value: 80.28122335716425
- type: euclidean_spearman
value: 78.85977608876506
- type: manhattan_pearson
value: 80.20419882971093
- type: manhattan_spearman
value: 78.79811621332709
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.54383068715617
- type: mrr
value: 93.62365031482678
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 39.111000000000004
- type: map_at_10
value: 47.686
- type: map_at_100
value: 48.722
- type: map_at_1000
value: 48.776
- type: map_at_3
value: 44.625
- type: map_at_5
value: 46.289
- type: mrr_at_1
value: 41.667
- type: mrr_at_10
value: 49.619
- type: mrr_at_100
value: 50.434
- type: mrr_at_1000
value: 50.482000000000006
- type: mrr_at_3
value: 46.833000000000006
- type: mrr_at_5
value: 48.317
- type: ndcg_at_1
value: 41.667
- type: ndcg_at_10
value: 52.819
- type: ndcg_at_100
value: 57.69
- type: ndcg_at_1000
value: 58.965
- type: ndcg_at_3
value: 46.857
- type: ndcg_at_5
value: 49.697
- type: precision_at_1
value: 41.667
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0070000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 39.111000000000004
- type: recall_at_10
value: 67.039
- type: recall_at_100
value: 89.767
- type: recall_at_1000
value: 99.467
- type: recall_at_3
value: 51.056000000000004
- type: recall_at_5
value: 57.99999999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72772277227723
- type: cos_sim_ap
value: 91.98542118937158
- type: cos_sim_f1
value: 85.91691995947316
- type: cos_sim_precision
value: 87.06365503080082
- type: cos_sim_recall
value: 84.8
- type: dot_accuracy
value: 99.72772277227723
- type: dot_ap
value: 91.98542118937158
- type: dot_f1
value: 85.91691995947316
- type: dot_precision
value: 87.06365503080082
- type: dot_recall
value: 84.8
- type: euclidean_accuracy
value: 99.72772277227723
- type: euclidean_ap
value: 91.98542118937158
- type: euclidean_f1
value: 85.91691995947316
- type: euclidean_precision
value: 87.06365503080082
- type: euclidean_recall
value: 84.8
- type: manhattan_accuracy
value: 99.72574257425742
- type: manhattan_ap
value: 91.96773898408213
- type: manhattan_f1
value: 85.8601327207759
- type: manhattan_precision
value: 87.69551616266945
- type: manhattan_recall
value: 84.1
- type: max_accuracy
value: 99.72772277227723
- type: max_ap
value: 91.98542118937158
- type: max_f1
value: 85.91691995947316
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 50.974351388709024
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.94724711190474
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 43.618618519378074
- type: mrr
value: 44.19061942959002
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.75942900919329
- type: cos_sim_spearman
value: 30.265779375382486
- type: dot_pearson
value: 29.759429009193283
- type: dot_spearman
value: 30.216316271647514
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.144
- type: map_at_10
value: 8.38
- type: map_at_100
value: 14.482000000000001
- type: map_at_1000
value: 16.179
- type: map_at_3
value: 3.821
- type: map_at_5
value: 5.96
- type: mrr_at_1
value: 26.531
- type: mrr_at_10
value: 41.501
- type: mrr_at_100
value: 42.575
- type: mrr_at_1000
value: 42.575
- type: mrr_at_3
value: 36.054
- type: mrr_at_5
value: 40.238
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 21.644
- type: ndcg_at_100
value: 35.427
- type: ndcg_at_1000
value: 47.116
- type: ndcg_at_3
value: 20.814
- type: ndcg_at_5
value: 22.783
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 21.224
- type: precision_at_100
value: 8.265
- type: precision_at_1000
value: 1.5959999999999999
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.144
- type: recall_at_10
value: 15.278
- type: recall_at_100
value: 50.541000000000004
- type: recall_at_1000
value: 86.144
- type: recall_at_3
value: 5.056
- type: recall_at_5
value: 9.203
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 75.88100000000001
- type: ap
value: 17.210410808772743
- type: f1
value: 58.7851360197636
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.68024900962084
- type: f1
value: 59.95386992880734
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.55446050017461
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.32699529117244
- type: cos_sim_ap
value: 61.49148139881723
- type: cos_sim_f1
value: 59.31940298507462
- type: cos_sim_precision
value: 54.17666303162486
- type: cos_sim_recall
value: 65.54089709762533
- type: dot_accuracy
value: 82.32699529117244
- type: dot_ap
value: 61.49148139881723
- type: dot_f1
value: 59.31940298507462
- type: dot_precision
value: 54.17666303162486
- type: dot_recall
value: 65.54089709762533
- type: euclidean_accuracy
value: 82.32699529117244
- type: euclidean_ap
value: 61.49148139881723
- type: euclidean_f1
value: 59.31940298507462
- type: euclidean_precision
value: 54.17666303162486
- type: euclidean_recall
value: 65.54089709762533
- type: manhattan_accuracy
value: 82.44024557429815
- type: manhattan_ap
value: 61.57050440663527
- type: manhattan_f1
value: 59.36456916800594
- type: manhattan_precision
value: 55.8501977204001
- type: manhattan_recall
value: 63.35092348284961
- type: max_accuracy
value: 82.44024557429815
- type: max_ap
value: 61.57050440663527
- type: max_f1
value: 59.36456916800594
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.70714479760935
- type: cos_sim_ap
value: 83.52059059692118
- type: cos_sim_f1
value: 75.8043805261034
- type: cos_sim_precision
value: 72.40171000070083
- type: cos_sim_recall
value: 79.54265475823837
- type: dot_accuracy
value: 87.70714479760935
- type: dot_ap
value: 83.52059016767844
- type: dot_f1
value: 75.8043805261034
- type: dot_precision
value: 72.40171000070083
- type: dot_recall
value: 79.54265475823837
- type: euclidean_accuracy
value: 87.70714479760935
- type: euclidean_ap
value: 83.52059046795347
- type: euclidean_f1
value: 75.8043805261034
- type: euclidean_precision
value: 72.40171000070083
- type: euclidean_recall
value: 79.54265475823837
- type: manhattan_accuracy
value: 87.7187875965382
- type: manhattan_ap
value: 83.5377383098018
- type: manhattan_f1
value: 75.87021520062012
- type: manhattan_precision
value: 72.87102035028008
- type: manhattan_recall
value: 79.12688635663689
- type: max_accuracy
value: 87.7187875965382
- type: max_ap
value: 83.5377383098018
- type: max_f1
value: 75.87021520062012
---
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
mrzaizai2k/distilroberta-base-sentence-transformer-triplets | mrzaizai2k | sentence-similarity | [
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:101762",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/distiluse-base-multilingual-cased-v2",
"base_model:finetune:sentence-transformers/distiluse-base-multilingual-cased-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,720 | 1,720 | 49 | 0 | ---
base_model: sentence-transformers/distiluse-base-multilingual-cased-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:101762
- loss:TripletLoss
widget:
- source_sentence: How do I clean the screen of my Toshiba TV?
sentences:
- How can I clear screen overlay from my Samsung Galaxy 6?
- Why do police forces exist?
- What is the best way to clean a flat screen monitor?
- source_sentence: What was the first video you watched on YouTube?
sentences:
- What was the first Youtube video you ever watched?
- What was the first music video ever produced?
- What was the long term effect of Hitler's desire to exterminate the Jewish people?
- source_sentence: What should I do to recover my data from a hard disk?
sentences:
- How do I recover my deleted data files from a hard disk?
- What's the best Linux operating System distro for beginners?
- Formated Data Recovery – Recover Data from Memory Card, Disk Drive, USB, External
Drive?
- source_sentence: What are your personal top ten music albums of all time?
sentences:
- What are your top 10 favourite songs of all time?
- What are the Top 10 music albums of all time on your list?
- What stream should I take in 11th if I have to become an automobile engineer?
- source_sentence: What is the best website to learn coding independently?
sentences:
- What are some of the best website to learn programming from being a total beginner?
- What books do I need to read to learn more about Sufism?
- What is the best (and fastest) way to learn how to code (web development)?
---
# SentenceTransformer based on sentence-transformers/distiluse-base-multilingual-cased-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) <!-- at revision 03a0532331151aeb3e1d2e602ffad62bb212a38d -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
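For intuition, the `Pooling` module above is configured for masked mean pooling over token embeddings (`pooling_mode_mean_tokens: True`). A generic sketch of that operation, not the library's internal code, looks roughly like this:
```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the token embeddings of non-padding positions.

    token_embeddings: (batch, seq_len, 768) output of the Transformer module
    attention_mask:   (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    return summed / counts                           # (batch, 768) sentence embeddings
```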
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chibao24/distilroberta-base-sentence-transformer-triplets")
# Run inference
sentences = [
'What is the best website to learn coding independently?',
'What are some of the best website to learn programming from being a total beginner?',
'What is the best (and fastest) way to learn how to code (web development)?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 101,762 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.7 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.66 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.22 tokens</li><li>max: 84 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:-------------------------------------------------------------------------------|:----------------------------------------------------------------------|:------------------------------------------------------------|
| <code>What are the differences between "be made of" and "be made from"?</code> | <code>What's the difference between "made of" and "made from"?</code> | <code>What is the difference between make and craft?</code> |
| <code>How can we use the word "inertia" in a sentence?</code> | <code>How can the word "inertia" be used in a sentence?</code> | <code>What is inertia actually?</code> |
| <code>Who are the new (i.e. first-time) Top Question Writers for 2017?</code> | <code>Who are the top question writers for 2017?</code> | <code>Who are the 2016 Top Writers?</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
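As a rough, hedged illustration of how such a loss is typically wired into training with Sentence Transformers v3 (this is not the exact script used for this model; the tiny in-memory dataset below is a placeholder built from the sample rows above):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Placeholder triplets with the same (anchor, positive, negative) column layout as above
train_dataset = Dataset.from_dict({
    "sentence_0": ["How can we use the word 'inertia' in a sentence?"],
    "sentence_1": ["How can the word 'inertia' be used in a sentence?"],
    "sentence_2": ["What is inertia actually?"],
})

model = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v2")

# TripletLoss with the parameters reported in this card
loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```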
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
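Expressed as Sentence Transformers v3 training arguments, the non-default values above would look roughly like this (a sketch, not the exact configuration used; the output directory is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/distiluse-triplets",  # placeholder path
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=4,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```
These `args` would then be passed to the `SentenceTransformerTrainer` sketched earlier.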
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.6281 | 500 | 4.2255 |
| 1.2563 | 1000 | 3.484 |
| 1.8844 | 1500 | 2.8611 |
| 2.5126 | 2000 | 2.4607 |
| 3.1407 | 2500 | 2.148 |
| 3.7688 | 3000 | 1.8583 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CRAFT"
] | Non_BioNLP |
RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2309.06085",
"arxiv:2101.09635",
"endpoints_compatible",
"region:us"
] | 1,722 | 1,722 | 127 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-8b-cpt-sea-lionv2-base - GGUF
- Model creator: https://huggingface.co/aisingapore/
- Original model: https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2-base/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8b-cpt-sea-lionv2-base.Q2_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8b-cpt-sea-lionv2-base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_0.gguf) | Q4_0 | 3.03GB |
| [llama3-8b-cpt-sea-lionv2-base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_K_S.gguf) | Q4_K_S | 1.52GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_K.gguf) | Q4_K | 0.36GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_K_M.gguf) | Q4_K_M | 0.16GB |
| [llama3-8b-cpt-sea-lionv2-base.Q4_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q4_1.gguf) | Q4_1 | 0.01GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_0.gguf) | Q5_0 | 0.17GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_K_S.gguf) | Q5_K_S | 1.65GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8b-cpt-sea-lionv2-base.Q5_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8b-cpt-sea-lionv2-base.Q6_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8b-cpt-sea-lionv2-base.Q8_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf/blob/main/llama3-8b-cpt-sea-lionv2-base.Q8_0.gguf) | Q8_0 | 7.95GB |
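The GGUF files above can be used with any llama.cpp-compatible runtime. The snippet below is a minimal, illustrative sketch (not an official snippet from the quantizer) using `huggingface_hub` and `llama-cpp-python`; the chosen quant file, context size, and prompt are arbitrary assumptions.

```python
# Illustrative only: download one quant from this repo and run a short completion.
# Assumes `pip install huggingface_hub llama-cpp-python`; pick any filename from the table above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2-base-gguf",
    filename="llama3-8b-cpt-sea-lionv2-base.Q6_K.gguf",  # example choice, not a recommendation
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice here
out = llm("Singapore is", max_tokens=32)
print(out["choices"][0]["text"])
```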
Original model description:
---
language:
- en
- id
- ta
- th
- vi
license: llama3
---
# Llama3 8B CPT SEA-LIONv2
SEA-LION is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
This is the card for the Llama3 8B CPT SEA-LIONv2 base model, which has undergone continued pre-training from the [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model.
SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
## Model Details
### Model Description
The continued pre-training data for the Llama3 8B CPT SEA-LIONv2 base model encompasses approximately 48B tokens.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Indonesian, Thai, Vietnamese, Tamil
- **License:** [Llama3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
For tokenization, the model employs the default tokenizer used in Meta-Llama-3-8B-Instruct.
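As a quick, hedged illustration (not an official usage snippet from AI Singapore), the base model and its tokenizer can be loaded with the Hugging Face `transformers` API; the dtype, prompt, and generation settings below are assumptions.

```python
# Illustrative sketch: load the base model and generate a continuation.
# Assumes `pip install transformers torch accelerate` and a GPU with enough memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/llama3-8b-cpt-sea-lionv2-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Selamat pagi! Hari ini saya ingin", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that this is the unaligned base checkpoint, so outputs are plain continuations rather than instruction-following responses.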
### Benchmark Performance
We evaluated the Llama3 8B CPT SEA-LIONv2 base model on general language capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities in SEA languages, we employed the [BHASA evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
The evaluation was done **five-shot** with native prompts, and only a sample of 100-1000 instances per dataset was used, as per the setting described in the paper.
**BHASA**
To be released soon
We also evaluated the model on English capabilities using tasks from the Open LLM Leaderboard.
**English**
| Model | ARC | BBH | HellaSwag | MMLU | GSM8k | Average |
| ----------------------------------------- |:-----:|:-----:|:---------:|:-----:|:-----:|:-------:|
| Qwen/Qwen2-7B | 61.86 | 53.10 | 80.63 | 70.45 | 78.09 | 68.83 |
| aisingapore/llama3-8b-cpt-sea-lionv2-base | 58.87 | 47.70 | 81.14 | 63.11 | 50.49 | 60.26 |
| meta-llama/Meta-Llama-3-8B | 57.85 | 46.09 | 81.89 | 65.10 | 45.34 | 59.25 |
| mistralai/Mistral-7B-v0.3 | 59.56 | 44.89 | 82.97 | 62.36 | 33.36 | 56.63 |
| Sail/Sailor-7B | 50.34 | 35.65 | 76.11 | 52.80 | 33.81 | 49.74 |
## Training Details
### Data
The Llama3 8B CPT SEA-LIONv2 base model underwent continued pre-training on 48B tokens of the following data:
| Data Source | Unique Tokens (B) | Multiplier | Total Tokens (B) | Percentage (%) |
|---------------------------|:-----------------:|:----------:|:----------------:|:--------------:|
| Dolma RefinedWeb - English| 7.650 | 1 | 7.650 | 15.90 |
| Dolma C4 - English | 1.160 | 1 | 1 | 9.21 |
| Dolma Reddit - English | 1.339 | 1 | 14.7 | 2.42 |
| Dolma Semantic Scholar | 0.959 | 1 | 2.9 | 2.79 |
| Dolma arXiv | 0.469 | 1 | 5.3 | 1.99 |
| Dolma StarCoder | 4.422 | 1 | 4.9 | 0.98 |
| SEA-LION Pile - Indonesian| 3.4 | 1 | 6.8 | 14.17 |
| Wiki* - Indonesian | 0.3 | 4 | 1.2 | 2.50 |
| SEA-LION Pile - Tamil | 5.6 | 1 | 5.6 | 11.67 |
| Wiki* + News - Tamil | 0.6 | 4 | 2.4 | 5.00 |
| SEA-LION Pile - Thai | 2.28 | 1 | 2.28 | 4.75 |
| WangChanBERTa - Thai | 5 | 1 | 5 | 10.42 |
| Wiki* - Thai | 0.18 | 4 | 0.72 | 1.50 |
| SEA-LION Pile - Vietnamese| 6.76 | 1 | 6.76 | 14.08 |
| Wiki* - Vietnamese | 0.31 | 4 | 1.24 | 2.58 |
Note:
- All token counts were computed with the Llama3 tokenizer (a small counting sketch follows these notes)
- Wiki* sources include Wikipedia, Wiki Books, Wiki Source and Wiki Voyage
- Tamil news is sourced with permission from [Seithi](https://seithi.mediacorp.sg/)
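Token counts of this kind can be reproduced with the same tokenizer. The sketch below is illustrative only; the document text is a placeholder and the counting convention (no special tokens) is an assumption.

```python
# Illustrative sketch: count Llama3 tokens in a document using the model's tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aisingapore/llama3-8b-cpt-sea-lionv2-base")

def count_tokens(text: str) -> int:
    # add_special_tokens=False so only the document's own tokens are counted
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(count_tokens("Ini adalah contoh dokumen pendek."))  # placeholder document
```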
### Infrastructure
Llama3 8B CPT SEA-LIONv2 was trained using [MosaicML Composer](https://github.com/mosaicml/composer)
on the following hardware:
| Training Details | Llama3 8B CPT SEA-LIONv2 |
|----------------------|:--------------------:|
| AWS EC2 p5d.24xlarge | 8 instances |
| Nvidia H100 80GB GPU | 64 |
| Training Duration | 2 days |
### Configuration
| HyperParameter | Llama3 8B CPT SEA-LIONv2 |
|-------------------|:--------------------:|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | weight_stable_decay |
| Learning Rate | 1.0e-5 |
| Global Batch Size | 512 |
| Micro Batch Size | 2 |
## The Team
Choa Esther<br>
Cheng Nicholas<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Teng Walter<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the base model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability
arising from the use of the released weights and codes.
## References
```bibtex
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] | Non_BioNLP |
Meosiuuubeo/paraphrase-multilingual-MiniLM-L12-v2-82000 | Meosiuuubeo | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:651725",
"loss:SoftmaxLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,733 | 1,733 | 5 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:651725
- loss:SoftmaxLoss
widget:
- source_sentence: Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?
sentences:
- Chu kỳ kiểm định chất lượng giáo dục nghề nghiệp\n...\n2. Trường hợp cơ sở giáo
dục nghề nghiệp có ngành, nghề trọng điểm; chương trình đào tạo ngành, nghề trọng
điểm; cơ sở giáo dục nghề nghiệp và chương trình đào tạo các ngành, nghề phục
vụ yêu cầu công tác quản lý nhà nước phải thực hiện kiểm định chất lượng giáo
dục nghề nghiệp theo quy định tại điểm d khoản 3 Điều 65 của Luật Giáo dục nghề
nghiệp số 74/2014/QH13 ngày 27 tháng 11 năm 2014 nhưng không đạt tiêu chuẩn kiểm
định chất lượng giáo dục nghề nghiệp thì trong thời hạn 03 năm phải thực hiện
kiểm định lại.
- Vệ sinh môi trường, vệ sinh tòa nhà\n1. Trách nhiệm của các đơn vị, cán bộ, công
chức, viên chức, nhân viên và người lao động trong việc giữ gìn vệ sinh tại nơi
làm việc và khu vực công cộng:\na) Hàng ngày tự vệ sinh sàn nhà, bàn ghế, tủ,
các thiết bị được trang cấp và tổng vệ sinh phòng làm việc vào chiều thứ Sáu hàng
tuần;\nb) Có trách nhiệm thu gom rác thải trong phòng chuyển ra thùng rác đặt
tại các hành lang;\nc) Không đổ nước chè, cà phê, ….. xuống sàn nhà, hành lang,
tường nhà và khu vệ sinh;\nd) Nghiêm cấp hút thuốc lá trong phòng làm việc, phòng
họp, cầu thang máy, cầu thang bộ, tầng hầm;\nđ) Không khạc nhổ, bôi bẩn lên tường,
không vứt rác thải, gạt tàn thuốc lá, đầu lọc thuốc lá xuống sàn nhà và các khu
vực công cộng;\ne) Nghiêm cấm hái hoa, bẻ cành, dẫm lên thảm cỏ, nhổ cây trong
khuôn viên cơ quan.\ng) Nghiêm cấm mang chất độc hại vào cơ quan.\n…
- Nguyên tắc áp dụng\n1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối
tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một
mức phụ cấp ưu đãi theo nghề cao nhất.\n2. Công chức, viên chức đã hưởng phụ cấp
ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày
22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số
64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên
chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì
không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.
- source_sentence: Số lượng thành viên Hội đồng khoa học và đào tạo là bao nhiêu?
sentences:
- 'Cấp Giấy chứng nhận chất lượng an toàn kỹ thuật và bảo vệ môi trường trong sản
xuất, lắp ráp ô tô, rơ moóc và sơ mi rơ moóc\n2.1. Trình tự thực hiện:\na) Nộp
hồ sơ TTHC:\n- Cơ sở sản xuất lập hồ sơ kiểm tra xe cơ giới theo quy định và nộp
đến Cục Đăng kiểm Việt Nam.\nb) Giải quyết TTHC:\n- Cục Đăng kiểm Việt Nam tiếp
nhận và kiểm tra thành phần hồ sơ kiểm tra xe cơ giới: nếu hồ sơ không đầy đủ
theo quy định thì hướng dẫn Cơ sở sản xuất hoàn thiện lại; Nếu hồ sơ đầy đủ theo
quy định thì thống nhất về thời gian và địa điểm thực hiện đánh giá điều kiện
kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất;\n- Cục Đăng kiểm Việt Nam tiến
hành kiểm tra nội dung hồ sơ và thực hiện đánh giá điều kiện kiểm tra chất lượng
sản phẩm tại Cơ sở sản xuất theo quy định: Nếu chưa đạt yêu cầu thì thông báo
để Cơ sở sản xuất hoàn thiện lại; Nếu đạt yêu cầu thì cấp Giấy chứng nhận trong
thời hạn 03 ngày làm việc kể từ ngày kết thúc kiểm tra, đánh giá hồ sơ đầy đủ,
hợp lệ theo quy định và có kết quả đánh giá COP đạt yêu cầu;\n- Cơ sở sản xuất
nộp hồ sơ kiểm tra xe cơ giới và nhận kết quả trực tiếp tại trụ sở Cục Đăng kiểm
Việt Nam hoặc qua hệ thống bưu chính hoặc qua hệ thống dịch vụ công trực tuyến
hoặc qua hình thức phù hợp khác.\n...'
- Phiên họp Hội đồng khoa học\n1. Hội đồng khoa học họp định kỳ 06 tháng/01 lần.
Các phiên họp định kỳ phải có ít nhất 2/3 tổng số thành viên của Hội đồng khoa
học tham dự.\n2. Phiên họp đột xuất của Hội đồng khoa học được triệu tập theo
quyết định của Chủ tịch và phải có trên 1/2 số thành viên của Hội đồng khoa học
tham dự.\n3. Viện trưởng VKSND tối cao tham dự phiên họp của Hội đồng khoa học
khi thấy cần thiết.\n4. Tùy thuộc vào nội dung chương trình phiên họp, Chủ tịch
Hội đồng khoa học có thể quyết định mời các nhà khoa học trong và ngoài ngành
KSND tham gia phiên họp.\n5. Nội dung phiên họp, các tài liệu liên quan đến phiên
họp của Hội đồng khoa học phải được thông báo hoặc chuyển cho các Thành viên chậm
nhất là 3 ngày làm việc trước ngày họp, trừ trường hợp đột xuất.\n6. Hội đồng
khoa học thảo luận dân chủ, tập thể, công khai, quyết định theo đa số về những
vấn đề thuộc nội dung phiên họp và những vấn đề do Chủ tịch Hội đồng khoa học
nêu ra hoặc do các Thành viên đề nghị và được Chủ tịch Hội đồng khoa học chấp
thuận.\nChủ tịch Hội đồng khoa học chủ trì thảo luận và kết luận tại phiên họp.
Đối với những vấn đề phức tạp còn nhiều ý kiến khác nhau, Hội đồng khoa học tiến
hành biểu quyết. Những vấn đề được biểu quyết đạt trên 2/3 số phiếu của thành
viên có mặt hoặc trên 50% tổng số thành viên Hội đồng được coi là ý kiến chính
thức của Hội đồng khoa học. Các ý kiến khác được bảo lưu, ghi vào biên bản cuộc
họp.
- Hồ sơ, thủ tục công nhận liệt sĩ\n1. Người khi hy sinh đang thuộc quân đội, công
an quản lý thì Bộ Quốc phòng, Bộ Công an chịu trách nhiệm:\na) Hướng dẫn về quy
trình lập hồ sơ đề nghị công nhận liệt sĩ theo quy định.\nb) Có văn bản đề nghị
kèm hồ sơ gửi Bộ Lao động - Thương binh và Xã hội thẩm định trong thời gian không
quá 50 ngày kể từ ngày cơ quan, đơn vị trực tiếp quản lý người hy sinh xác lập,
hoàn thiện các giấy tờ quy định tại Điều 17 Nghị định này.
- source_sentence: Ban Tài chính Văn phòng Kiểm toán nhà nước thực hiện những chức
năng gì?
sentences:
- 'Tiếp nhận hồ sơ và trả kết quả\n...\n2.2.4. Lao động nam hoặc người chồng của
lao động nữ mang thai hộ nghỉ việc khi vợ sinh con: Bản sao giấy chứng sinh hoặc
bản sao giấy khai sinh hoặc trích lục khai sinh của con; trường hợp sinh con phải
phẫu thuật hoặc sinh con dưới 32 tuần tuổi mà giấy chứng sinh không thể hiện thì
có thêm giấy tờ của cơ sở khám bệnh, chữa bệnh thể hiện việc sinh con phải phẫu
thuật, sinh con dưới 32 tuần tuổi. Trường hợp con chết sau khi sinh mà chưa được
cấp giấy chứng sinh thì thay bằng trích sao hoặc tóm tắt hồ sơ bệnh án hoặc giấy
ra viện của người mẹ hoặc của lao động nữ mang thai hộ thể hiện con chết…'
- Việc tự giám sát chất lượng dịch vụ viễn thông của doanh nghiệp viễn thông\n1.
Các doanh nghiệp viễn thông được Bộ Thông tin và Truyền thông cấp giấy phép kinh
doanh dịch vụ viễn thông phải thường xuyên tự giám sát chất lượng dịch vụ đối
với tất cả các dịch vụ thuộc “Danh mục dịch vụ viễn thông bắt buộc quản lý chất
lượng” mà mình cung cấp.\n2. Trong trường hợp dịch vụ mà mình cung cấp có sự cố
thì doanh nghiệp viễn thông phải thực hiện báo cáo đột xuất như quy định tại Khoản
3 Điều 8 của Thông tư này.
- Cục Quản lý, giám sát bảo hiểm; Cục Quản lý Công sản; Cục Quản lý Giá; Cục Quản
lý Nợ và Tài chính đối ngoại; Cục Quản lý, giám sát Kế toán, Kiểm toán; Cục Quản
lý Công sản; Cục Tài chính doanh nghiệp và Vụ Tài chính ngân hàng chủ trì phối
hợp với Cục Tin học & Thống kê Tài chính xây dựng quy trình điện tử từng thủ tục
hành chính theo phạm vi quản lý đối với danh mục thủ tục hành chính để thực hiện
tích hợp trên Hệ thống thông tin Một cửa điện tử của Bộ Tài chính.
- source_sentence: Điều kiện để Giám đốc Học viện An ninh nhân dân được thăng cấp
bậc hàm trước thời hạn như thế nào?
sentences:
- Mức độ tự chủ và trách nhiệm\n- Có ý thức và tác phong nghề nghiệp đúng chuẩn
mực, có năng lực thực hiện công việc được giao; phương pháp làm việc khoa học,
biết phân tích và giải quyết các vấn đề mới về lĩnh vực chuyên môn nghề;\n- Gắn
bó nghề nghiệp; nghiêm chỉnh chấp hành quy chế, quy định của cơ quan, doanh nghiệp,
nơi đang công tác với ý thức tổ chức kỉ luật và tinh thần trách nhiệm cao trong
công việc;\n- Lập được các biện pháp an toàn và đảm bảo an toàn, vệ sinh lao động
trong quá trình làm việc; có ý thức trách nhiệm công dân, thái độ và đạo đức nghề
nghiệp đúng đắn, sẵn sàng nhận nhiệm vụ; tự tin, cầu tiến trong công việc; hợp
tác, thân thiện, khiêm tốn trong các mối quan hệ;\n- Tự chịu trách nhiệm về chất
lượng đối với kết quả công việc, sản phẩm do mình đảm nhiệm theo các tiêu chuẩn
và chịu một phần trách nhiệm đối với kết quả công việc, sản phẩm của tổ, nhóm;
- Tổ chức bộ máy\n...\n5. Tổng cục Hải quan có thể biệt phái công chức từ các đơn
vị thuộc và trực thuộc Tổng cục để bổ sung cán bộ chủ chốt, cán bộ kỹ thuật có
năng lực, kinh nghiệm cho Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng
cục Hải quan. Thời hạn biệt phái các công chức không quá 03 năm, trường hợp quá
03 năm mà chưa hoàn thành dự án thì Tổng cục Hải quan xem xét quyết định bổ sung
thời gian biệt phái.\nNhân sự tuyển dụng mới của Ban Quản lý dự án đầu tư xây
dựng chuyên ngành của Tổng cục Hải quan là viên chức hoặc hợp đồng lao động, thực
hiện theo quy định về chế độ tiền lương và các chế độ, chính sách đối với viên
chức và người lao động.\n...
- Biệt phái công chức\n...\n6. Không thực hiện biệt phái công chức nữ đang mang
thai hoặc nuôi con dưới 36 tháng tuổi.
- source_sentence: Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức,
viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?
sentences:
- Nhiệm vụ của giáo viên\n1. Thực hiện nhiệm vụ tổ chức các hoạt động dạy học, giáo
dục theo kế hoạch giáo dục của nhà trường và kế hoạch giáo dục của tổ chuyên môn;
quản lý học sinh trong các hoạt động giáo dục do nhà trường tổ chức; tham gia
các hoạt động chuyên môn; chịu trách nhiệm về chất lượng, hiệu quả giáo dục.\n2.
Trau dồi đạo đức, nêu cao tinh thần trách nhiệm, giữ gìn phẩm chất, danh dự, uy
tín của nhà giáo; gương mẫu trước học sinh; thương yêu, đối xử công bằng và tôn
trọng nhân cách của học sinh; bảo vệ các quyền và lợi ích chính đáng của học sinh;
đoàn kết, giúp đỡ đồng nghiệp.\n3. Học tập, rèn luyện để nâng cao sức khỏe, trình
độ chính trị, chuyên môn, nghiệp vụ, đổi mới phương pháp dạy học, giáo dục.\n4.
Tham gia tập huấn, bồi dưỡng chuyên môn, nghiệp vụ.\n5. Tham gia công tác phổ
cập giáo dục trung học cơ sở ở địa phương.\n6. Thực hiện nghĩa vụ công dân, các
quy định của pháp luật và của ngành Giáo dục, các quyết định của hiệu trưởng;
thực hiện nhiệm vụ do hiệu trưởng phân công, chịu sự kiểm tra, đánh giá của hiệu
trưởng và các cấp quản lý giáo dục.\n7. Phối hợp với Đội Thiếu niên Tiền phong
Hồ Chí Minh, Đoàn Thanh niên Cộng sản Hồ Chí Minh, Hội Liên hiệp Thanh niên Việt
Nam, gia đình học sinh và các tổ chức xã hội liên quan để tổ chức hoạt động giáo
dục.\n8. Thực hiện các nhiệm vụ khác theo quy định của pháp luật.
- “Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong
trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP
ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \n1. Máy
PCR. \n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \n3. Test kít
xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \n4. Máy thở chức năng
cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng
cao, máy thở xách tay. \n5. Máy lọc máu liên tục. \n6. Máy X-Quang di động. \n7.
Máy đo khí máu (đo được điện giải, lactat, hematocrite). \n8. Máy theo dõi bệnh
nhân>5 thông số. \n9. Bơm tiêm điện; Bơm truyền dịch. \n10. Máy phá rung tim có
tạo nhịp. \n11. Máy đo thời gian đông máu. \n12. Máy đo huyết động.”
- Thời điểm đánh giá xếp loại chất lượng hằng năm\n...\n2. Căn cứ tình hình thực
tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống
nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất
lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo
đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\n3. Tại thời điểm đánh giá,
xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ
chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm
làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ
được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá,
xếp loại chất lượng theo quy định của pháp luật và Quy chế này.
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Meosiuuubeo/paraphrase-multilingual-MiniLM-L12-v2-82000")
# Run inference
sentences = [
'Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?',
'Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này.',
'“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. Máy đo huyết động.”',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
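Because the model produces dense sentence embeddings, it can also be used for semantic search over a legal-text corpus similar to the training data. The snippet below is a hedged illustration: the corpus contents and `top_k` are placeholders, and it assumes the checkpoint is available under this repository id.

```python
# Illustrative semantic-search sketch on top of the embeddings above.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Meosiuuubeo/paraphrase-multilingual-MiniLM-L12-v2-82000")

# Placeholder corpus and query, loosely modeled on the training samples shown below.
corpus = [
    "Thời điểm đánh giá xếp loại chất lượng hằng năm ...",
    "Nguyên tắc áp dụng phụ cấp ưu đãi theo nghề ...",
]
query = "Thời điểm đánh giá và xếp loại chất lượng hằng năm là khi nào?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]][:60])
```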
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 651,725 training samples
* Columns: <code>queries</code>, <code>corpus</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | queries | corpus | score |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 24.71 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 121.6 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~43.80%</li><li>1: ~37.00%</li><li>2: ~19.20%</li></ul> |
* Samples:
| queries | corpus | score |
|:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Người học ngành quản lý khai thác công trình thủy lợi trình độ cao đẳng phải có khả năng học tập và nâng cao trình độ như thế nào?</code> | <code>Khả năng học tập, nâng cao trình độ\n- Khối lượng khối lượng kiến thức tối thiểu, yêu cầu về năng lực mà người học phải đạt được sau khi tốt nghiệp ngành, nghề Dược trình độ cao đẳng có thể tiếp tục phát triển ở các trình độ cao hơn;\n- Người học sau tốt nghiệp có năng lực tự học, tự cập nhật những tiến bộ khoa học công nghệ trong phạm vi ngành, nghề để nâng cao trình độ hoặc học liên thông lên trình độ cao hơn trong cùng ngành nghề hoặc trong nhóm ngành, nghề hoặc trong cùng lĩnh vực đào tạo.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nTrong phạm vi điều chỉnh của văn bản quy phạm pháp luật:\n1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.\n2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.\n3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Mục đích lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nLồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật (sau đây gọi tắt là văn bản) là một biện pháp để thực hiện mục tiêu bình đẳng giới, xóa bỏ phân biệt đối xử về giới, bảo đảm quyền, lợi ích hợp pháp, phù hợp với đặc thù của mỗi giới; tạo cơ hội phát triển như nhau cho nam và nữ trong các lĩnh vực của đời sống xã hội và gia đình; bảo đảm bình đẳng giới thực chất giữa nam và nữ.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
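For readers who want to reproduce a run of this shape, the sketch below shows how (queries, corpus, score) triples could be fed to `SoftmaxLoss` through the Sentence Transformers fit API. It is a minimal illustration under assumed data loading, not the training setup actually used for this checkpoint.

```python
# Illustrative sketch of SoftmaxLoss fine-tuning on (queries, corpus, score) triples.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder triples; the real dataset has 651,725 rows with integer labels in {0, 1, 2}.
train_examples = [
    InputExample(texts=["query text", "relevant passage"], label=2),
    InputExample(texts=["another query", "unrelated passage"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# SoftmaxLoss trains a 3-way classifier head on top of the paired sentence embeddings.
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```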
### Training Hyperparameters
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0061 | 500 | 1.0473 |
| 0.0123 | 1000 | 1.0447 |
| 0.0184 | 1500 | 1.0383 |
| 0.0246 | 2000 | 1.0395 |
| 0.0307 | 2500 | 1.0436 |
| 0.0368 | 3000 | 1.0375 |
| 0.0430 | 3500 | 1.0189 |
| 0.0491 | 4000 | 1.0282 |
| 0.0552 | 4500 | 1.0355 |
| 0.0614 | 5000 | 1.0286 |
| 0.0675 | 5500 | 1.0264 |
| 0.0737 | 6000 | 1.0174 |
| 0.0798 | 6500 | 1.0238 |
| 0.0859 | 7000 | 1.0217 |
| 0.0921 | 7500 | 1.0203 |
| 0.0982 | 8000 | 1.0201 |
| 0.1043 | 8500 | 1.0266 |
| 0.1105 | 9000 | 1.0379 |
| 0.1166 | 9500 | 1.0367 |
| 0.1228 | 10000 | 1.0384 |
| 0.1289 | 10500 | 1.0291 |
| 0.1350 | 11000 | 1.0362 |
| 0.1412 | 11500 | 1.0354 |
| 0.1473 | 12000 | 1.0204 |
| 0.1534 | 12500 | 1.0401 |
| 0.1596 | 13000 | 1.0237 |
| 0.1657 | 13500 | 1.0271 |
| 0.1719 | 14000 | 1.0235 |
| 0.1780 | 14500 | 1.0329 |
| 0.1841 | 15000 | 1.0474 |
| 0.1903 | 15500 | 1.0547 |
| 0.1964 | 16000 | 1.0557 |
| 0.2025 | 16500 | 1.0626 |
| 0.2087 | 17000 | 1.0551 |
| 0.2148 | 17500 | 1.0526 |
| 0.2210 | 18000 | 1.125 |
| 0.2271 | 18500 | 1.2996 |
| 0.2332 | 19000 | 1.0703 |
| 0.2394 | 19500 | 1.0601 |
| 0.2455 | 20000 | 1.0835 |
| 0.2516 | 20500 | 1.0583 |
| 0.2578 | 21000 | 1.141 |
| 0.2639 | 21500 | 1.0802 |
| 0.2701 | 22000 | 1.0589 |
| 0.2762 | 22500 | 1.086 |
| 0.2823 | 23000 | 1.0743 |
| 0.2885 | 23500 | 1.0605 |
| 0.2946 | 24000 | 1.0602 |
| 0.3007 | 24500 | 1.0732 |
| 0.3069 | 25000 | 1.0614 |
| 0.3130 | 25500 | 1.0666 |
| 0.3192 | 26000 | 1.0669 |
| 0.3253 | 26500 | 1.0627 |
| 0.3314 | 27000 | 1.0659 |
| 0.3376 | 27500 | 1.07 |
| 0.3437 | 28000 | 1.0783 |
| 0.3498 | 28500 | 1.078 |
| 0.3560 | 29000 | 1.0832 |
| 0.3621 | 29500 | 1.0695 |
| 0.3683 | 30000 | 1.0714 |
| 0.3744 | 30500 | 1.3794 |
| 0.3805 | 31000 | 1.0838 |
| 0.3867 | 31500 | 1.0541 |
| 0.3928 | 32000 | 1.0799 |
| 0.3989 | 32500 | 1.0622 |
| 0.4051 | 33000 | 1.0597 |
| 0.4112 | 33500 | 1.0731 |
| 0.4174 | 34000 | 1.0871 |
| 0.4235 | 34500 | 1.0535 |
| 0.4296 | 35000 | 1.3215 |
| 0.4358 | 35500 | 1.1501 |
| 0.4419 | 36000 | 1.1088 |
| 0.4480 | 36500 | 1.0844 |
| 0.4542 | 37000 | 1.0981 |
| 0.4603 | 37500 | 1.0856 |
| 0.4665 | 38000 | 1.0956 |
| 0.4726 | 38500 | 1.0813 |
| 0.4787 | 39000 | 1.0843 |
| 0.4849 | 39500 | 1.1053 |
| 0.4910 | 40000 | 1.092 |
| 0.4971 | 40500 | 1.081 |
| 0.5033 | 41000 | 1.0919 |
| 0.5094 | 41500 | 1.0681 |
| 0.5156 | 42000 | 1.0826 |
| 0.5217 | 42500 | 1.0809 |
| 0.5278 | 43000 | 1.093 |
| 0.5340 | 43500 | 1.0709 |
| 0.5401 | 44000 | 1.0623 |
| 0.5462 | 44500 | 1.0801 |
| 0.5524 | 45000 | 1.0833 |
| 0.5585 | 45500 | 1.0816 |
| 0.5647 | 46000 | 1.0697 |
| 0.5708 | 46500 | 1.0864 |
| 0.5769 | 47000 | 1.0744 |
| 0.5831 | 47500 | 1.0897 |
| 0.5892 | 48000 | 1.0727 |
| 0.5953 | 48500 | 1.0621 |
| 0.6015 | 49000 | 1.0582 |
| 0.6076 | 49500 | 1.0681 |
| 0.6138 | 50000 | 1.083 |
| 0.6199 | 50500 | 1.0632 |
| 0.6260 | 51000 | 1.0809 |
| 0.6322 | 51500 | 1.0525 |
| 0.6383 | 52000 | 1.6649 |
| 0.6444 | 52500 | 1.0873 |
| 0.6506 | 53000 | 1.0649 |
| 0.6567 | 53500 | 1.0591 |
| 0.6629 | 54000 | 1.061 |
| 0.6690 | 54500 | 1.0682 |
| 0.6751 | 55000 | 1.0616 |
| 0.6813 | 55500 | 1.0827 |
| 0.6874 | 56000 | 1.0799 |
| 0.6935 | 56500 | 1.0705 |
| 0.6997 | 57000 | 1.0821 |
| 0.7058 | 57500 | 1.0763 |
| 0.7120 | 58000 | 1.0842 |
| 0.7181 | 58500 | 1.0813 |
| 0.7242 | 59000 | 1.0678 |
| 0.7304 | 59500 | 1.0894 |
| 0.7365 | 60000 | 1.0733 |
| 0.7426 | 60500 | 1.0688 |
| 0.7488 | 61000 | 1.0665 |
| 0.7549 | 61500 | 1.0681 |
| 0.7611 | 62000 | 1.301 |
| 0.7672 | 62500 | 1.0907 |
| 0.7733 | 63000 | 1.3941 |
| 0.7795 | 63500 | 1.1355 |
| 0.7856 | 64000 | 1.2196 |
| 0.7917 | 64500 | 1.225 |
| 0.7979 | 65000 | 1.1437 |
| 0.8040 | 65500 | 1.0787 |
| 0.8102 | 66000 | 1.0686 |
| 0.8163 | 66500 | 1.1017 |
| 0.8224 | 67000 | 1.0999 |
| 0.8286 | 67500 | 1.0771 |
| 0.8347 | 68000 | 1.1015 |
| 0.8408 | 68500 | 1.0826 |
| 0.8470 | 69000 | 1.1046 |
| 0.8531 | 69500 | 1.0735 |
| 0.8593 | 70000 | 1.1056 |
| 0.8654 | 70500 | 1.1077 |
| 0.8715 | 71000 | 1.0897 |
| 0.8777 | 71500 | 1.0775 |
| 0.8838 | 72000 | 1.0907 |
| 0.8899 | 72500 | 1.0705 |
| 0.8961 | 73000 | 1.0776 |
| 0.9022 | 73500 | 1.0896 |
| 0.9084 | 74000 | 1.0889 |
| 0.9145 | 74500 | 1.0804 |
| 0.9206 | 75000 | 1.1087 |
| 0.9268 | 75500 | 1.0738 |
| 0.9329 | 76000 | 1.0806 |
| 0.9390 | 76500 | 1.0899 |
| 0.9452 | 77000 | 1.0814 |
| 0.9513 | 77500 | 1.0723 |
| 0.9575 | 78000 | 1.0923 |
| 0.9636 | 78500 | 1.0748 |
| 0.9697 | 79000 | 1.0745 |
| 0.9759 | 79500 | 1.081 |
| 0.9820 | 80000 | 1.08 |
| 0.9881 | 80500 | 1.0905 |
| 0.9943 | 81000 | 1.1064 |
| 1.0004 | 81500 | 1.0929 |
| 1.0066 | 82000 | 1.0815 |
</details>
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.3.1
- Transformers: 4.43.0
- PyTorch: 2.5.0+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"PCR"
] | Non_BioNLP |
anuccikpmg/multilingual-e5-large-instruct | anuccikpmg | feature-extraction | [
"sentence-transformers",
"onnx",
"xlm-roberta",
"feature-extraction",
"mteb",
"transformers",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2401.00368",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,738 | 1,738 | 11 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- sentence-transformers
- transformers
model-index:
- name: multilingual-e5-large-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.23880597014924
- type: ap
value: 39.07351965022687
- type: f1
value: 70.04836733862683
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.71306209850107
- type: ap
value: 79.01499914759529
- type: f1
value: 64.81951817560703
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.85307346326837
- type: ap
value: 22.447519885878737
- type: f1
value: 61.0162730745633
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.04925053533191
- type: ap
value: 23.44983217128922
- type: f1
value: 62.5723230907759
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.28742500000001
- type: ap
value: 94.8449918887462
- type: f1
value: 96.28680923610432
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 56.716
- type: f1
value: 55.76510398266401
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 52.99999999999999
- type: f1
value: 52.00829994765178
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.806000000000004
- type: f1
value: 48.082345914983634
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.507999999999996
- type: f1
value: 47.68752844642045
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.709999999999994
- type: f1
value: 47.05870376637181
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.662000000000006
- type: f1
value: 43.42371965372771
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.721
- type: map_at_10
value: 49.221
- type: map_at_100
value: 49.884
- type: map_at_1000
value: 49.888
- type: map_at_3
value: 44.31
- type: map_at_5
value: 47.276
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 49.5
- type: mrr_at_100
value: 50.163000000000004
- type: mrr_at_1000
value: 50.166
- type: mrr_at_3
value: 44.618
- type: mrr_at_5
value: 47.541
- type: ndcg_at_1
value: 31.721
- type: ndcg_at_10
value: 58.384
- type: ndcg_at_100
value: 61.111000000000004
- type: ndcg_at_1000
value: 61.187999999999995
- type: ndcg_at_3
value: 48.386
- type: ndcg_at_5
value: 53.708999999999996
- type: precision_at_1
value: 31.721
- type: precision_at_10
value: 8.741
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.609
- type: recall_at_1
value: 31.721
- type: recall_at_10
value: 87.411
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.044
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.40419580759799
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.48593255007969
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.889179122289995
- type: mrr
value: 77.61146286769556
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.15075203727929
- type: cos_sim_spearman
value: 86.9622224570873
- type: euclidean_pearson
value: 86.70473853624121
- type: euclidean_spearman
value: 86.9622224570873
- type: manhattan_pearson
value: 86.21089380980065
- type: manhattan_spearman
value: 86.75318154937008
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.65553235908142
- type: f1
value: 99.60681976339595
- type: precision
value: 99.58246346555325
- type: recall
value: 99.65553235908142
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26260180497468
- type: f1
value: 99.14520507740848
- type: precision
value: 99.08650671362535
- type: recall
value: 99.26260180497468
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.07412538967787
- type: f1
value: 97.86629719431936
- type: precision
value: 97.76238309664012
- type: recall
value: 98.07412538967787
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.42074776197998
- type: f1
value: 99.38564156573635
- type: precision
value: 99.36808846761454
- type: recall
value: 99.42074776197998
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.73376623376623
- type: f1
value: 85.68480707214599
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.935218072113855
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.276389017675264
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.764166666666668
- type: map_at_10
value: 37.298166666666674
- type: map_at_100
value: 38.530166666666666
- type: map_at_1000
value: 38.64416666666667
- type: map_at_3
value: 34.484833333333334
- type: map_at_5
value: 36.0385
- type: mrr_at_1
value: 32.93558333333333
- type: mrr_at_10
value: 41.589749999999995
- type: mrr_at_100
value: 42.425333333333334
- type: mrr_at_1000
value: 42.476333333333336
- type: mrr_at_3
value: 39.26825
- type: mrr_at_5
value: 40.567083333333336
- type: ndcg_at_1
value: 32.93558333333333
- type: ndcg_at_10
value: 42.706583333333334
- type: ndcg_at_100
value: 47.82483333333333
- type: ndcg_at_1000
value: 49.95733333333334
- type: ndcg_at_3
value: 38.064750000000004
- type: ndcg_at_5
value: 40.18158333333333
- type: precision_at_1
value: 32.93558333333333
- type: precision_at_10
value: 7.459833333333334
- type: precision_at_100
value: 1.1830833333333335
- type: precision_at_1000
value: 0.15608333333333332
- type: precision_at_3
value: 17.5235
- type: precision_at_5
value: 12.349833333333333
- type: recall_at_1
value: 27.764166666666668
- type: recall_at_10
value: 54.31775
- type: recall_at_100
value: 76.74350000000001
- type: recall_at_1000
value: 91.45208333333332
- type: recall_at_3
value: 41.23425
- type: recall_at_5
value: 46.73983333333334
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.969
- type: map_at_10
value: 21.584999999999997
- type: map_at_100
value: 23.3
- type: map_at_1000
value: 23.5
- type: map_at_3
value: 18.218999999999998
- type: map_at_5
value: 19.983
- type: mrr_at_1
value: 29.316
- type: mrr_at_10
value: 40.033
- type: mrr_at_100
value: 40.96
- type: mrr_at_1000
value: 41.001
- type: mrr_at_3
value: 37.123
- type: mrr_at_5
value: 38.757999999999996
- type: ndcg_at_1
value: 29.316
- type: ndcg_at_10
value: 29.858
- type: ndcg_at_100
value: 36.756
- type: ndcg_at_1000
value: 40.245999999999995
- type: ndcg_at_3
value: 24.822
- type: ndcg_at_5
value: 26.565
- type: precision_at_1
value: 29.316
- type: precision_at_10
value: 9.186
- type: precision_at_100
value: 1.6549999999999998
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 18.436
- type: precision_at_5
value: 13.876
- type: recall_at_1
value: 12.969
- type: recall_at_10
value: 35.142
- type: recall_at_100
value: 59.143
- type: recall_at_1000
value: 78.594
- type: recall_at_3
value: 22.604
- type: recall_at_5
value: 27.883000000000003
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.527999999999999
- type: map_at_10
value: 17.974999999999998
- type: map_at_100
value: 25.665
- type: map_at_1000
value: 27.406000000000002
- type: map_at_3
value: 13.017999999999999
- type: map_at_5
value: 15.137
- type: mrr_at_1
value: 62.5
- type: mrr_at_10
value: 71.891
- type: mrr_at_100
value: 72.294
- type: mrr_at_1000
value: 72.296
- type: mrr_at_3
value: 69.958
- type: mrr_at_5
value: 71.121
- type: ndcg_at_1
value: 50.875
- type: ndcg_at_10
value: 38.36
- type: ndcg_at_100
value: 44.235
- type: ndcg_at_1000
value: 52.154
- type: ndcg_at_3
value: 43.008
- type: ndcg_at_5
value: 40.083999999999996
- type: precision_at_1
value: 62.5
- type: precision_at_10
value: 30.0
- type: precision_at_100
value: 10.038
- type: precision_at_1000
value: 2.0869999999999997
- type: precision_at_3
value: 46.833000000000006
- type: precision_at_5
value: 38.800000000000004
- type: recall_at_1
value: 8.527999999999999
- type: recall_at_10
value: 23.828
- type: recall_at_100
value: 52.322
- type: recall_at_1000
value: 77.143
- type: recall_at_3
value: 14.136000000000001
- type: recall_at_5
value: 17.761
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.51
- type: f1
value: 47.632159862049896
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.734
- type: map_at_10
value: 72.442
- type: map_at_100
value: 72.735
- type: map_at_1000
value: 72.75
- type: map_at_3
value: 70.41199999999999
- type: map_at_5
value: 71.80499999999999
- type: mrr_at_1
value: 65.212
- type: mrr_at_10
value: 76.613
- type: mrr_at_100
value: 76.79899999999999
- type: mrr_at_1000
value: 76.801
- type: mrr_at_3
value: 74.8
- type: mrr_at_5
value: 76.12400000000001
- type: ndcg_at_1
value: 65.212
- type: ndcg_at_10
value: 77.988
- type: ndcg_at_100
value: 79.167
- type: ndcg_at_1000
value: 79.452
- type: ndcg_at_3
value: 74.362
- type: ndcg_at_5
value: 76.666
- type: precision_at_1
value: 65.212
- type: precision_at_10
value: 10.003
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 29.518
- type: precision_at_5
value: 19.016
- type: recall_at_1
value: 60.734
- type: recall_at_10
value: 90.824
- type: recall_at_100
value: 95.71600000000001
- type: recall_at_1000
value: 97.577
- type: recall_at_3
value: 81.243
- type: recall_at_5
value: 86.90299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.845
- type: map_at_10
value: 39.281
- type: map_at_100
value: 41.422
- type: map_at_1000
value: 41.593
- type: map_at_3
value: 34.467
- type: map_at_5
value: 37.017
- type: mrr_at_1
value: 47.531
- type: mrr_at_10
value: 56.204
- type: mrr_at_100
value: 56.928999999999995
- type: mrr_at_1000
value: 56.962999999999994
- type: mrr_at_3
value: 54.115
- type: mrr_at_5
value: 55.373000000000005
- type: ndcg_at_1
value: 47.531
- type: ndcg_at_10
value: 47.711999999999996
- type: ndcg_at_100
value: 54.510999999999996
- type: ndcg_at_1000
value: 57.103
- type: ndcg_at_3
value: 44.145
- type: ndcg_at_5
value: 45.032
- type: precision_at_1
value: 47.531
- type: precision_at_10
value: 13.194
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.249
- type: precision_at_3
value: 29.424
- type: precision_at_5
value: 21.451
- type: recall_at_1
value: 23.845
- type: recall_at_10
value: 54.967
- type: recall_at_100
value: 79.11399999999999
- type: recall_at_1000
value: 94.56700000000001
- type: recall_at_3
value: 40.256
- type: recall_at_5
value: 46.215
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.819
- type: map_at_10
value: 60.889
- type: map_at_100
value: 61.717999999999996
- type: map_at_1000
value: 61.778
- type: map_at_3
value: 57.254000000000005
- type: map_at_5
value: 59.541
- type: mrr_at_1
value: 75.638
- type: mrr_at_10
value: 82.173
- type: mrr_at_100
value: 82.362
- type: mrr_at_1000
value: 82.37
- type: mrr_at_3
value: 81.089
- type: mrr_at_5
value: 81.827
- type: ndcg_at_1
value: 75.638
- type: ndcg_at_10
value: 69.317
- type: ndcg_at_100
value: 72.221
- type: ndcg_at_1000
value: 73.382
- type: ndcg_at_3
value: 64.14
- type: ndcg_at_5
value: 67.07600000000001
- type: precision_at_1
value: 75.638
- type: precision_at_10
value: 14.704999999999998
- type: precision_at_100
value: 1.698
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 41.394999999999996
- type: precision_at_5
value: 27.162999999999997
- type: recall_at_1
value: 37.819
- type: recall_at_10
value: 73.52499999999999
- type: recall_at_100
value: 84.875
- type: recall_at_1000
value: 92.559
- type: recall_at_3
value: 62.092999999999996
- type: recall_at_5
value: 67.907
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.60079999999999
- type: ap
value: 92.67396345347356
- type: f1
value: 94.5988098167121
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.285
- type: map_at_10
value: 33.436
- type: map_at_100
value: 34.63
- type: map_at_1000
value: 34.681
- type: map_at_3
value: 29.412
- type: map_at_5
value: 31.715
- type: mrr_at_1
value: 21.848
- type: mrr_at_10
value: 33.979
- type: mrr_at_100
value: 35.118
- type: mrr_at_1000
value: 35.162
- type: mrr_at_3
value: 30.036
- type: mrr_at_5
value: 32.298
- type: ndcg_at_1
value: 21.862000000000002
- type: ndcg_at_10
value: 40.43
- type: ndcg_at_100
value: 46.17
- type: ndcg_at_1000
value: 47.412
- type: ndcg_at_3
value: 32.221
- type: ndcg_at_5
value: 36.332
- type: precision_at_1
value: 21.862000000000002
- type: precision_at_10
value: 6.491
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.744
- type: precision_at_5
value: 10.331999999999999
- type: recall_at_1
value: 21.285
- type: recall_at_10
value: 62.083
- type: recall_at_100
value: 88.576
- type: recall_at_1000
value: 98.006
- type: recall_at_3
value: 39.729
- type: recall_at_5
value: 49.608000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.92612859097127
- type: f1
value: 93.82370333372853
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.67681036911807
- type: f1
value: 92.14191382411472
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.26817878585723
- type: f1
value: 91.92824250337878
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.96554963983714
- type: f1
value: 90.02859329630792
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.02509860164935
- type: f1
value: 89.30665159182062
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.55515370705244
- type: f1
value: 87.94449232331907
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.4623803009576
- type: f1
value: 66.06738378772725
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.3716539870386
- type: f1
value: 60.37614033396853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.34022681787857
- type: f1
value: 58.302008026952
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.72095208268087
- type: f1
value: 59.64524724009049
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.87020437432773
- type: f1
value: 57.80202694670567
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.73598553345387
- type: f1
value: 58.19628250675031
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.6630800268998
- type: f1
value: 65.00996668051691
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.7128446536651
- type: f1
value: 57.95860594874963
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.61129791526563
- type: f1
value: 59.75328290206483
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.00134498991257
- type: f1
value: 67.0230483991802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.54604628946976
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.032952252858095
- type: f1
value: 58.715741857057104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.80901143241427
- type: f1
value: 68.33963989243877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.47141896435777
- type: f1
value: 69.56765020308262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.2373907195696
- type: f1
value: 69.04529836036467
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.05783456624076
- type: f1
value: 74.69430584708174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.82111634162744
- type: f1
value: 70.77228952803762
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.25353059852051
- type: f1
value: 71.05310103416411
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.28648285137861
- type: f1
value: 69.08020473732226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.31540013449899
- type: f1
value: 70.9426355465791
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.2151983860121
- type: f1
value: 67.52541755908858
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.58372562205784
- type: f1
value: 69.49769064229827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.9233355749832
- type: f1
value: 69.36311548259593
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.07330195023538
- type: f1
value: 64.99882022345572
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.62273032952253
- type: f1
value: 70.6394885471001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.77000672494957
- type: f1
value: 62.9368944815065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.453261600538
- type: f1
value: 70.85069934666681
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6906523201076
- type: f1
value: 72.03249740074217
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.03631472763953
- type: f1
value: 59.3165215571852
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.913920645595155
- type: f1
value: 57.367337711611285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.42837928715535
- type: f1
value: 52.60527294970906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33490248823135
- type: f1
value: 63.213340969404065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.58507061197041
- type: f1
value: 68.40256628040486
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 66.44863577842305
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.70073974445192
- type: f1
value: 67.21291337273702
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.43913920645595
- type: f1
value: 64.09838087422806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.80026899798251
- type: f1
value: 68.76986742962444
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.78816408876934
- type: f1
value: 62.18781873428972
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.6577000672495
- type: f1
value: 68.75171511133003
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.42501681237391
- type: f1
value: 71.18434963451544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.64828513786146
- type: f1
value: 70.67741914007422
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.62811028917284
- type: f1
value: 71.36402039740959
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.88634835238736
- type: f1
value: 69.23701923480677
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.15938130464022
- type: f1
value: 71.87792218993388
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.96301277740416
- type: f1
value: 67.29584200202983
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.49562878278412
- type: f1
value: 66.91716685679431
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6805648957633
- type: f1
value: 72.02723592594374
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.00605245460659
- type: f1
value: 60.16716669482932
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.90988567585742
- type: f1
value: 63.99405488777784
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.62273032952253
- type: f1
value: 65.17213906909481
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.50907868190988
- type: f1
value: 69.15165697194853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.30733019502352
- type: f1
value: 66.69024007380474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.24277067921989
- type: f1
value: 68.80515408492947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.49831876260929
- type: f1
value: 64.83778567111116
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.28782784129119
- type: f1
value: 69.3294186700733
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.315400134499
- type: f1
value: 71.22674385243207
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.37794216543377
- type: f1
value: 68.96962492838232
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.33557498318764
- type: f1
value: 72.28949738478356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.84398117014123
- type: f1
value: 64.71026362091463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.76462676529925
- type: f1
value: 69.8229667407667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.02420981842636
- type: f1
value: 71.76576384895898
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.7572293207801
- type: f1
value: 72.76840765295256
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.02286482851379
- type: f1
value: 66.17237947327872
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.60928043039678
- type: f1
value: 77.27094731234773
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.68325487558843
- type: f1
value: 77.97530399082261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.13315400134498
- type: f1
value: 75.97558584796424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.47410894418292
- type: f1
value: 80.52244841473792
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.9670477471419
- type: f1
value: 77.37318805793146
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.09683927370544
- type: f1
value: 77.69773737430847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.20847343644922
- type: f1
value: 75.17071738727348
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07464694014796
- type: f1
value: 77.16136207698571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.53396099529255
- type: f1
value: 73.58296404484122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.75319435104237
- type: f1
value: 75.24674707850833
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.0948217888366
- type: f1
value: 76.47559490205028
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.07599193006052
- type: f1
value: 70.76028043093511
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.10490921318089
- type: f1
value: 77.01215275283272
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25756556825824
- type: f1
value: 70.20605314648762
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 77.3899269057439
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.35440484196369
- type: f1
value: 79.58964690002772
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.42299932750504
- type: f1
value: 68.07844356925413
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.15669132481507
- type: f1
value: 65.89383352608513
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.11432414256894
- type: f1
value: 57.69910594559806
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.24747814391392
- type: f1
value: 70.42455553830918
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46267652992603
- type: f1
value: 76.8854559308316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.24815063887021
- type: f1
value: 72.77805034658074
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11566913248151
- type: f1
value: 73.86147988001356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.0168123739072
- type: f1
value: 69.38515920054571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.41156691324814
- type: f1
value: 73.43474953408237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.39609952925353
- type: f1
value: 67.29731681109291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.20914593140552
- type: f1
value: 77.07066497935367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.52387357094821
- type: f1
value: 78.5259569473291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.6913248150639
- type: f1
value: 76.91201656350455
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.1217215870881
- type: f1
value: 77.41179937912504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.25891055817083
- type: f1
value: 75.8089244542887
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.70679219905851
- type: f1
value: 78.21459594517711
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.83523873570948
- type: f1
value: 74.86847028401978
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.71755211835911
- type: f1
value: 74.0214326485662
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.06523201075991
- type: f1
value: 79.10545620325138
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.91862811028918
- type: f1
value: 66.50386121217983
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 70.755435928495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.40753194351042
- type: f1
value: 71.61816115782923
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.1815736381977
- type: f1
value: 75.08016717887205
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.86482851378614
- type: f1
value: 72.39521180006291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46940147948891
- type: f1
value: 76.70044085362349
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195024
- type: f1
value: 71.5721825332298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.7511768661735
- type: f1
value: 75.17918654541515
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69535978480162
- type: f1
value: 78.90019070153316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.45729657027572
- type: f1
value: 76.19578371794672
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.92715354123554
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 35.53536244162518
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.08507884504006
- type: mrr
value: 34.32436977159129
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.935
- type: map_at_10
value: 13.297
- type: map_at_100
value: 16.907
- type: map_at_1000
value: 18.391
- type: map_at_3
value: 9.626999999999999
- type: map_at_5
value: 11.190999999999999
- type: mrr_at_1
value: 46.129999999999995
- type: mrr_at_10
value: 54.346000000000004
- type: mrr_at_100
value: 55.067
- type: mrr_at_1000
value: 55.1
- type: mrr_at_3
value: 51.961
- type: mrr_at_5
value: 53.246
- type: ndcg_at_1
value: 44.118
- type: ndcg_at_10
value: 35.534
- type: ndcg_at_100
value: 32.946999999999996
- type: ndcg_at_1000
value: 41.599000000000004
- type: ndcg_at_3
value: 40.25
- type: ndcg_at_5
value: 37.978
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.842
- type: precision_at_100
value: 8.427
- type: precision_at_1000
value: 2.128
- type: precision_at_3
value: 37.977
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.935
- type: recall_at_10
value: 17.211000000000002
- type: recall_at_100
value: 34.33
- type: recall_at_1000
value: 65.551
- type: recall_at_3
value: 10.483
- type: recall_at_5
value: 13.078999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.231
- type: map_at_10
value: 50.202000000000005
- type: map_at_100
value: 51.154999999999994
- type: map_at_1000
value: 51.181
- type: map_at_3
value: 45.774
- type: map_at_5
value: 48.522
- type: mrr_at_1
value: 39.687
- type: mrr_at_10
value: 52.88
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.58500000000001
- type: mrr_at_3
value: 49.228
- type: mrr_at_5
value: 51.525
- type: ndcg_at_1
value: 39.687
- type: ndcg_at_10
value: 57.754000000000005
- type: ndcg_at_100
value: 61.597
- type: ndcg_at_1000
value: 62.18900000000001
- type: ndcg_at_3
value: 49.55
- type: ndcg_at_5
value: 54.11899999999999
- type: precision_at_1
value: 39.687
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.229
- type: precision_at_5
value: 15.939
- type: recall_at_1
value: 35.231
- type: recall_at_10
value: 78.083
- type: recall_at_100
value: 94.42099999999999
- type: recall_at_1000
value: 98.81
- type: recall_at_3
value: 57.047000000000004
- type: recall_at_5
value: 67.637
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.241
- type: map_at_10
value: 85.462
- type: map_at_100
value: 86.083
- type: map_at_1000
value: 86.09700000000001
- type: map_at_3
value: 82.49499999999999
- type: map_at_5
value: 84.392
- type: mrr_at_1
value: 82.09
- type: mrr_at_10
value: 88.301
- type: mrr_at_100
value: 88.383
- type: mrr_at_1000
value: 88.384
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.035
- type: ndcg_at_1
value: 82.12
- type: ndcg_at_10
value: 89.149
- type: ndcg_at_100
value: 90.235
- type: ndcg_at_1000
value: 90.307
- type: ndcg_at_3
value: 86.37599999999999
- type: ndcg_at_5
value: 87.964
- type: precision_at_1
value: 82.12
- type: precision_at_10
value: 13.56
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.88
- type: precision_at_5
value: 24.92
- type: recall_at_1
value: 71.241
- type: recall_at_10
value: 96.128
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.994
- type: recall_at_3
value: 88.181
- type: recall_at_5
value: 92.694
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.59757799655151
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.27391998854624
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.243
- type: map_at_10
value: 10.965
- type: map_at_100
value: 12.934999999999999
- type: map_at_1000
value: 13.256
- type: map_at_3
value: 7.907
- type: map_at_5
value: 9.435
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.849
- type: mrr_at_100
value: 32.964
- type: mrr_at_1000
value: 33.024
- type: mrr_at_3
value: 28.517
- type: mrr_at_5
value: 30.381999999999998
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 18.723
- type: ndcg_at_100
value: 26.384999999999998
- type: ndcg_at_1000
value: 32.114
- type: ndcg_at_3
value: 17.753
- type: ndcg_at_5
value: 15.558
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 2.078
- type: precision_at_1000
value: 0.345
- type: precision_at_3
value: 16.900000000000002
- type: precision_at_5
value: 13.88
- type: recall_at_1
value: 4.243
- type: recall_at_10
value: 19.885
- type: recall_at_100
value: 42.17
- type: recall_at_1000
value: 70.12
- type: recall_at_3
value: 10.288
- type: recall_at_5
value: 14.072000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.84209174935282
- type: cos_sim_spearman
value: 81.73248048438833
- type: euclidean_pearson
value: 83.02810070308149
- type: euclidean_spearman
value: 81.73248295679514
- type: manhattan_pearson
value: 82.95368060376002
- type: manhattan_spearman
value: 81.60277910998718
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.52628804556943
- type: cos_sim_spearman
value: 82.5713913555672
- type: euclidean_pearson
value: 85.8796774746988
- type: euclidean_spearman
value: 82.57137506803424
- type: manhattan_pearson
value: 85.79671002960058
- type: manhattan_spearman
value: 82.49445981618027
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.23682503505542
- type: cos_sim_spearman
value: 87.15008956711806
- type: euclidean_pearson
value: 86.79805401524959
- type: euclidean_spearman
value: 87.15008956711806
- type: manhattan_pearson
value: 86.65298502699244
- type: manhattan_spearman
value: 86.97677821948562
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.63370304677802
- type: cos_sim_spearman
value: 84.97105553540318
- type: euclidean_pearson
value: 85.28896108687721
- type: euclidean_spearman
value: 84.97105553540318
- type: manhattan_pearson
value: 85.09663190337331
- type: manhattan_spearman
value: 84.79126831644619
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 90.2614838800733
- type: cos_sim_spearman
value: 91.0509162991835
- type: euclidean_pearson
value: 90.33098317533373
- type: euclidean_spearman
value: 91.05091625871644
- type: manhattan_pearson
value: 90.26250435151107
- type: manhattan_spearman
value: 90.97999594417519
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.80480973335091
- type: cos_sim_spearman
value: 87.313695492969
- type: euclidean_pearson
value: 86.49267251576939
- type: euclidean_spearman
value: 87.313695492969
- type: manhattan_pearson
value: 86.44019901831935
- type: manhattan_spearman
value: 87.24205395460392
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.05662789380672
- type: cos_sim_spearman
value: 90.02759424426651
- type: euclidean_pearson
value: 90.4042483422981
- type: euclidean_spearman
value: 90.02759424426651
- type: manhattan_pearson
value: 90.51446975000226
- type: manhattan_spearman
value: 90.08832889933616
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.5975528273532
- type: cos_sim_spearman
value: 67.62969861411354
- type: euclidean_pearson
value: 69.224275734323
- type: euclidean_spearman
value: 67.62969861411354
- type: manhattan_pearson
value: 69.3761447059927
- type: manhattan_spearman
value: 67.90921005611467
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.11244327231684
- type: cos_sim_spearman
value: 88.37902438979035
- type: euclidean_pearson
value: 87.86054279847336
- type: euclidean_spearman
value: 88.37902438979035
- type: manhattan_pearson
value: 87.77257757320378
- type: manhattan_spearman
value: 88.25208966098123
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.87174608143563
- type: mrr
value: 96.12836872640794
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.258
- type: map_at_100
value: 67.757
- type: map_at_1000
value: 67.78800000000001
- type: map_at_3
value: 64.602
- type: map_at_5
value: 65.64
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.441
- type: mrr_at_100
value: 68.825
- type: mrr_at_1000
value: 68.853
- type: mrr_at_3
value: 66.444
- type: mrr_at_5
value: 67.26100000000001
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.852
- type: ndcg_at_100
value: 73.9
- type: ndcg_at_1000
value: 74.628
- type: ndcg_at_3
value: 67.093
- type: ndcg_at_5
value: 68.58
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.111
- type: precision_at_5
value: 16.733
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.967
- type: recall_at_100
value: 93.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.589
- type: recall_at_5
value: 75.483
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.66633663366336
- type: cos_sim_ap
value: 91.17685358899108
- type: cos_sim_f1
value: 82.16818642350559
- type: cos_sim_precision
value: 83.26488706365504
- type: cos_sim_recall
value: 81.10000000000001
- type: dot_accuracy
value: 99.66633663366336
- type: dot_ap
value: 91.17663411119032
- type: dot_f1
value: 82.16818642350559
- type: dot_precision
value: 83.26488706365504
- type: dot_recall
value: 81.10000000000001
- type: euclidean_accuracy
value: 99.66633663366336
- type: euclidean_ap
value: 91.17685189882275
- type: euclidean_f1
value: 82.16818642350559
- type: euclidean_precision
value: 83.26488706365504
- type: euclidean_recall
value: 81.10000000000001
- type: manhattan_accuracy
value: 99.66633663366336
- type: manhattan_ap
value: 91.2241619496737
- type: manhattan_f1
value: 82.20472440944883
- type: manhattan_precision
value: 86.51933701657458
- type: manhattan_recall
value: 78.3
- type: max_accuracy
value: 99.66633663366336
- type: max_ap
value: 91.2241619496737
- type: max_f1
value: 82.20472440944883
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.85101268897951
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 42.461184054706905
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.44542568873886
- type: mrr
value: 52.33656151854681
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.75982974997539
- type: cos_sim_spearman
value: 30.385405026539914
- type: dot_pearson
value: 30.75982433546523
- type: dot_spearman
value: 30.385405026539914
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22799999999999998
- type: map_at_10
value: 2.064
- type: map_at_100
value: 13.056000000000001
- type: map_at_1000
value: 31.747999999999998
- type: map_at_3
value: 0.67
- type: map_at_5
value: 1.097
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 94.667
- type: mrr_at_100
value: 94.667
- type: mrr_at_1000
value: 94.667
- type: mrr_at_3
value: 94.667
- type: mrr_at_5
value: 94.667
- type: ndcg_at_1
value: 86.0
- type: ndcg_at_10
value: 82.0
- type: ndcg_at_100
value: 64.307
- type: ndcg_at_1000
value: 57.023999999999994
- type: ndcg_at_3
value: 85.816
- type: ndcg_at_5
value: 84.904
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 85.8
- type: precision_at_100
value: 66.46
- type: precision_at_1000
value: 25.202
- type: precision_at_3
value: 90.0
- type: precision_at_5
value: 89.2
- type: recall_at_1
value: 0.22799999999999998
- type: recall_at_10
value: 2.235
- type: recall_at_100
value: 16.185
- type: recall_at_1000
value: 53.620999999999995
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.172
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.75
- type: precision
value: 96.45
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.54913294797689
- type: f1
value: 82.46628131021194
- type: precision
value: 81.1175337186898
- type: recall
value: 85.54913294797689
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.21951219512195
- type: f1
value: 77.33333333333334
- type: precision
value: 75.54878048780488
- type: recall
value: 81.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.26666666666665
- type: precision
value: 98.1
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.5
- type: f1
value: 99.33333333333333
- type: precision
value: 99.25
- type: recall
value: 99.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.2
- type: precision
value: 96.89999999999999
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.18333333333334
- type: precision
value: 96.88333333333333
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.61194029850746
- type: f1
value: 72.81094527363183
- type: precision
value: 70.83333333333333
- type: recall
value: 77.61194029850746
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.91666666666667
- type: precision
value: 91.08333333333334
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.29268292682927
- type: f1
value: 85.27642276422765
- type: precision
value: 84.01277584204414
- type: recall
value: 88.29268292682927
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.0
- type: precision
value: 94.46666666666668
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.681652490887
- type: f1
value: 91.90765492102065
- type: precision
value: 91.05913325232888
- type: recall
value: 93.681652490887
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.17391304347827
- type: f1
value: 89.97101449275361
- type: precision
value: 88.96811594202899
- type: recall
value: 92.17391304347827
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.43478260869566
- type: f1
value: 87.72173913043478
- type: precision
value: 86.42028985507245
- type: recall
value: 90.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.03
- type: precision
value: 86.95
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.45666666666666
- type: precision
value: 90.525
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.9059107358263
- type: f1
value: 78.32557872364869
- type: precision
value: 76.78260286824823
- type: recall
value: 81.9059107358263
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.58333333333333
- type: precision
value: 91.73333333333332
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.10000000000001
- type: f1
value: 74.50500000000001
- type: precision
value: 72.58928571428571
- type: recall
value: 79.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.55
- type: precision
value: 95.05
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.0952380952381
- type: f1
value: 77.98458049886621
- type: precision
value: 76.1968253968254
- type: recall
value: 82.0952380952381
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.9
- type: f1
value: 84.99190476190476
- type: precision
value: 83.65
- type: recall
value: 87.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.56666666666666
- type: precision
value: 94.01666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.2
- type: precision
value: 98.0
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.38333333333334
- type: precision
value: 93.78333333333335
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.4
- type: f1
value: 84.10380952380952
- type: precision
value: 82.67
- type: recall
value: 87.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.33333333333334
- type: precision
value: 93.78333333333333
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.4
- type: f1
value: 86.82000000000001
- type: precision
value: 85.64500000000001
- type: recall
value: 89.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.1
- type: f1
value: 93.56666666666668
- type: precision
value: 92.81666666666666
- type: recall
value: 95.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.9
- type: f1
value: 98.6
- type: precision
value: 98.45
- type: recall
value: 98.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.01347708894879
- type: f1
value: 93.51752021563343
- type: precision
value: 92.82794249775381
- type: recall
value: 95.01347708894879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.00854700854701
- type: f1
value: 96.08262108262107
- type: precision
value: 95.65527065527067
- type: recall
value: 97.00854700854701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.39999999999999
- type: precision
value: 94.88333333333333
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5909090909091
- type: f1
value: 95.49242424242425
- type: precision
value: 94.9621212121212
- type: recall
value: 96.5909090909091
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.90566037735849
- type: f1
value: 81.85883997204752
- type: precision
value: 80.54507337526205
- type: recall
value: 84.90566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5
- type: f1
value: 96.75
- type: precision
value: 96.38333333333333
- type: recall
value: 97.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7704280155642
- type: f1
value: 82.99610894941635
- type: precision
value: 81.32295719844358
- type: recall
value: 86.7704280155642
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.52136752136752
- type: f1
value: 61.89662189662191
- type: precision
value: 59.68660968660969
- type: recall
value: 67.52136752136752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 86.32
- type: precision
value: 85.015
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.78333333333333
- type: precision
value: 94.18333333333334
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.8785046728972
- type: f1
value: 80.54517133956385
- type: precision
value: 79.154984423676
- type: recall
value: 83.8785046728972
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.01333333333334
- type: precision
value: 91.28333333333333
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.1
- type: f1
value: 96.26666666666667
- type: precision
value: 95.85000000000001
- type: recall
value: 97.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.67833333333333
- type: precision
value: 79.03928571428571
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.48333333333332
- type: precision
value: 96.08333333333331
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.66666666666667
- type: precision
value: 94.16666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.36666666666667
- type: precision
value: 95.96666666666668
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.80666666666667
- type: precision
value: 92.12833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.22333333333334
- type: precision
value: 95.875
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.33333333333333
- type: f1
value: 70.78174603174602
- type: precision
value: 69.28333333333332
- type: recall
value: 74.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.6
- type: f1
value: 32.938348952090365
- type: precision
value: 31.2811038961039
- type: recall
value: 37.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.5
- type: f1
value: 89.13333333333333
- type: precision
value: 88.03333333333333
- type: recall
value: 91.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.14285714285714
- type: f1
value: 77.67857142857143
- type: precision
value: 75.59523809523809
- type: recall
value: 82.14285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.0450054884742
- type: f1
value: 63.070409283362075
- type: precision
value: 60.58992781824835
- type: recall
value: 69.0450054884742
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.1
- type: f1
value: 57.848333333333336
- type: precision
value: 55.69500000000001
- type: recall
value: 63.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.01666666666667
- type: precision
value: 94.5
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.90666666666667
- type: precision
value: 94.425
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.61333333333333
- type: precision
value: 83.27
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.4
- type: f1
value: 71.90746031746032
- type: precision
value: 70.07027777777778
- type: recall
value: 76.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.26666666666667
- type: precision
value: 96.95
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.8
- type: f1
value: 74.39555555555555
- type: precision
value: 72.59416666666667
- type: recall
value: 78.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.78999999999999
- type: precision
value: 93.125
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.75
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.25666666666666
- type: precision
value: 93.64166666666668
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.934306569343065
- type: f1
value: 51.461591936044485
- type: precision
value: 49.37434827945776
- type: recall
value: 56.934306569343065
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.200000000000003
- type: f1
value: 16.91799284049284
- type: precision
value: 15.791855158730158
- type: recall
value: 20.200000000000003
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.2
- type: f1
value: 95.3
- type: precision
value: 94.85
- type: recall
value: 96.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.3
- type: f1
value: 95.11666666666667
- type: precision
value: 94.53333333333333
- type: recall
value: 96.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.88095238095238
- type: f1
value: 87.14285714285714
- type: precision
value: 85.96230158730161
- type: recall
value: 89.88095238095238
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 24.099999999999998
- type: f1
value: 19.630969083349783
- type: precision
value: 18.275094905094907
- type: recall
value: 24.099999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.4368530020704
- type: f1
value: 79.45183870649709
- type: precision
value: 77.7432712215321
- type: recall
value: 83.4368530020704
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.53333333333333
- type: precision
value: 93.91666666666666
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.8
- type: f1
value: 98.48333333333332
- type: precision
value: 98.33333333333334
- type: recall
value: 98.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.5
- type: f1
value: 14.979285714285714
- type: precision
value: 14.23235060690943
- type: recall
value: 17.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.93939393939394
- type: f1
value: 91.991341991342
- type: precision
value: 91.05339105339105
- type: recall
value: 93.93939393939394
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.31297709923665
- type: f1
value: 86.76844783715012
- type: precision
value: 85.63613231552164
- type: recall
value: 89.31297709923665
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.12663755458514
- type: f1
value: 98.93255701115964
- type: precision
value: 98.83551673944687
- type: recall
value: 99.12663755458514
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.0
- type: f1
value: 89.77999999999999
- type: precision
value: 88.78333333333333
- type: recall
value: 92.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.89265536723164
- type: f1
value: 95.85687382297553
- type: precision
value: 95.33898305084746
- type: recall
value: 96.89265536723164
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.6
- type: f1
value: 11.820611790170615
- type: precision
value: 11.022616224355355
- type: recall
value: 14.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.93333333333334
- type: precision
value: 94.48666666666666
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.72333333333334
- type: precision
value: 83.44166666666666
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.47333333333333
- type: precision
value: 92.875
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.71666666666665
- type: precision
value: 95.28333333333335
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.8
- type: f1
value: 14.511074040901628
- type: precision
value: 13.503791000666002
- type: recall
value: 17.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.10187667560321
- type: f1
value: 92.46648793565683
- type: precision
value: 91.71134941912423
- type: recall
value: 94.10187667560321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.11666666666666
- type: precision
value: 95.68333333333334
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.72727272727273
- type: f1
value: 66.58949745906267
- type: precision
value: 63.86693017127799
- type: recall
value: 72.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.14084507042254
- type: f1
value: 88.26291079812206
- type: precision
value: 87.32394366197182
- type: recall
value: 90.14084507042254
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.67065868263472
- type: f1
value: 58.2876627696987
- type: precision
value: 55.79255774165953
- type: recall
value: 64.67065868263472
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.41666666666667
- type: precision
value: 93.85
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.172413793103445
- type: f1
value: 49.63992493549144
- type: precision
value: 47.71405113769646
- type: recall
value: 55.172413793103445
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.4417616811983
- type: precision
value: 71.91607981220658
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.61538461538461
- type: f1
value: 80.91452991452994
- type: precision
value: 79.33760683760683
- type: recall
value: 84.61538461538461
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2
- type: f1
value: 97.6
- type: precision
value: 97.3
- type: recall
value: 98.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.5741127348643
- type: f1
value: 72.00417536534445
- type: precision
value: 70.53467872883321
- type: recall
value: 75.5741127348643
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.2
- type: f1
value: 55.577460317460314
- type: precision
value: 52.98583333333333
- type: recall
value: 62.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.18241042345277
- type: f1
value: 90.6468124709167
- type: precision
value: 89.95656894679696
- type: recall
value: 92.18241042345277
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.13333333333333
- type: precision
value: 94.66666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.85000000000001
- type: precision
value: 95.39999999999999
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.1259842519685
- type: f1
value: 89.76377952755905
- type: precision
value: 88.71391076115485
- type: recall
value: 92.1259842519685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.49
- type: precision
value: 91.725
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.5623268698061
- type: f1
value: 73.27364463791058
- type: precision
value: 71.51947852086357
- type: recall
value: 77.5623268698061
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.56666666666666
- type: precision
value: 96.16666666666667
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34615384615384
- type: f1
value: 61.092032967032964
- type: precision
value: 59.27197802197802
- type: recall
value: 66.34615384615384
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.41190476190476
- type: precision
value: 92.7
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.13333333333333
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.97333333333334
- type: precision
value: 91.14166666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.21698113207547
- type: f1
value: 90.3796046720575
- type: precision
value: 89.56367924528303
- type: recall
value: 92.21698113207547
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.91666666666667
- type: precision
value: 96.6
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.44525547445255
- type: f1
value: 96.71532846715328
- type: precision
value: 96.35036496350365
- type: recall
value: 97.44525547445255
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.34000000000002
- type: precision
value: 91.49166666666667
- type: recall
value: 94.1
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.2910000000000004
- type: map_at_10
value: 10.373000000000001
- type: map_at_100
value: 15.612
- type: map_at_1000
value: 17.06
- type: map_at_3
value: 6.119
- type: map_at_5
value: 7.917000000000001
- type: mrr_at_1
value: 44.897999999999996
- type: mrr_at_10
value: 56.054
- type: mrr_at_100
value: 56.82000000000001
- type: mrr_at_1000
value: 56.82000000000001
- type: mrr_at_3
value: 52.381
- type: mrr_at_5
value: 53.81
- type: ndcg_at_1
value: 42.857
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 36.529
- type: ndcg_at_1000
value: 48.136
- type: ndcg_at_3
value: 33.938
- type: ndcg_at_5
value: 29.951
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 22.653000000000002
- type: precision_at_100
value: 7.000000000000001
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 27.755000000000003
- type: recall_at_1
value: 3.2910000000000004
- type: recall_at_10
value: 16.16
- type: recall_at_100
value: 43.908
- type: recall_at_1000
value: 79.823
- type: recall_at_3
value: 7.156
- type: recall_at_5
value: 10.204
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.05879999999999
- type: ap
value: 14.609748142799111
- type: f1
value: 54.878956295843096
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.61799660441426
- type: f1
value: 64.8698191961434
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.32860036611885
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.34714192048638
- type: cos_sim_ap
value: 80.26732975975634
- type: cos_sim_f1
value: 73.53415148134374
- type: cos_sim_precision
value: 69.34767360299276
- type: cos_sim_recall
value: 78.25857519788919
- type: dot_accuracy
value: 88.34714192048638
- type: dot_ap
value: 80.26733698491206
- type: dot_f1
value: 73.53415148134374
- type: dot_precision
value: 69.34767360299276
- type: dot_recall
value: 78.25857519788919
- type: euclidean_accuracy
value: 88.34714192048638
- type: euclidean_ap
value: 80.26734337771738
- type: euclidean_f1
value: 73.53415148134374
- type: euclidean_precision
value: 69.34767360299276
- type: euclidean_recall
value: 78.25857519788919
- type: manhattan_accuracy
value: 88.30541813196639
- type: manhattan_ap
value: 80.19415808104145
- type: manhattan_f1
value: 73.55143870713441
- type: manhattan_precision
value: 73.25307511122743
- type: manhattan_recall
value: 73.85224274406332
- type: max_accuracy
value: 88.34714192048638
- type: max_ap
value: 80.26734337771738
- type: max_f1
value: 73.55143870713441
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.81061047075717
- type: cos_sim_ap
value: 87.11747055081017
- type: cos_sim_f1
value: 80.04355498817256
- type: cos_sim_precision
value: 78.1165262000733
- type: cos_sim_recall
value: 82.06806282722513
- type: dot_accuracy
value: 89.81061047075717
- type: dot_ap
value: 87.11746902745236
- type: dot_f1
value: 80.04355498817256
- type: dot_precision
value: 78.1165262000733
- type: dot_recall
value: 82.06806282722513
- type: euclidean_accuracy
value: 89.81061047075717
- type: euclidean_ap
value: 87.11746919324248
- type: euclidean_f1
value: 80.04355498817256
- type: euclidean_precision
value: 78.1165262000733
- type: euclidean_recall
value: 82.06806282722513
- type: manhattan_accuracy
value: 89.79508673885202
- type: manhattan_ap
value: 87.11074390832218
- type: manhattan_f1
value: 80.13002540726349
- type: manhattan_precision
value: 77.83826945412311
- type: manhattan_recall
value: 82.56082537727133
- type: max_accuracy
value: 89.81061047075717
- type: max_ap
value: 87.11747055081017
- type: max_f1
value: 80.13002540726349
---
## Multilingual-E5-large-instruct
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 24 layers and the embedding size is 1024.
## Usage
Below are examples showing how to encode queries and passages from the MS-MARCO passage ranking dataset.
### Transformers
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-large-instruct')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-large-instruct')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# => [[91.92852783203125, 67.580322265625], [70.3814468383789, 92.1330795288086]]
```
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, '南瓜的家常做法')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
input_texts = queries + documents
model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')
embeddings = model.encode(input_texts, convert_to_tensor=True, normalize_embeddings=True)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
# [[91.92853546142578, 67.5802993774414], [70.38143157958984, 92.13307189941406]]
```
## Supported Languages
This model is initialized from [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [xlm-roberta-large](https://huggingface.co/xlm-roberta-large)
**First stage**: contrastive pre-training with 1 billion weakly supervised text pairs.
**Second stage**: fine-tuning on datasets from the [E5-mistral](https://arxiv.org/abs/2401.00368) paper.
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
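For a quick local sanity check you can also drive the `mteb` Python package directly. The snippet below is a minimal sketch, assuming `mteb` and `sentence-transformers` are installed; the task name and output folder are placeholders, the exact API may differ between `mteb` versions, and the official E5 evaluation additionally prepends task-specific instructions to queries, so scores from this plain run can differ from the reported numbers.
```python
# Minimal, illustrative MTEB run; task name and output folder are placeholders.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-large-instruct')
evaluation = MTEB(tasks=['STSBenchmark'])
results = evaluation.run(model, output_folder='results/multilingual-e5-large-instruct')
print(results)
```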
## FAQ
**1. Do I need to add instructions to the query?**
Yes. This is how the model was trained; omitting the instruction will cause a performance degradation.
The task definition should be a one-sentence instruction that describes the task.
This is a way to customize text embeddings for different scenarios through natural language instructions.
Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for instructions we used for evaluation.
On the other hand, there is no need to add instructions to the document side.
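For illustration, this is what the two sides look like with the helper from the usage examples above; only the query is wrapped, while the document is passed through unchanged:
```python
# Query side gets the instruction, document side stays as-is.
task = 'Given a web search query, retrieve relevant passages that answer the query'
query = 'how much protein should a female eat'
instructed_query = f'Instruct: {task}\nQuery: {query}'
document = "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day."
print(instructed_query)
# Instruct: Given a web search query, retrieve relevant passages that answer the query
# Query: how much protein should a female eat
```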
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores fall mostly between 0.7 and 1.0?**
This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
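In practice this means you should compare and rank scores rather than apply a fixed absolute threshold. A minimal sketch, reusing the `scores` tensor computed in the usage examples above:
```python
import torch

# Rank documents for each query; the ordering is meaningful even though all values look "high".
ranked = torch.argsort(scores, dim=1, descending=True)
for q, order in enumerate(ranked):
    print(f'query {q}: best document index = {order[0].item()}')
```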
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
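If you are unsure whether an input will be cut off, you can count tokens before encoding. A minimal sketch, assuming the tokenizer loaded in the Transformers example above:
```python
# Count tokens to see whether a text exceeds the 512-token limit.
text = 'some very long passage ...'
n_tokens = len(tokenizer(text, truncation=False)['input_ids'])
if n_tokens > 512:
    print(f'Input has {n_tokens} tokens and will be truncated to 512.')
```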
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,713 | 1,713 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped-v0/
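A minimal loading sketch for this 8-bit export, assuming `transformers`, `accelerate` and `bitsandbytes` are installed and a CUDA GPU is available; the generation settings are illustrative only:
```python
# Illustrative only: load the pre-quantized 8-bit weights and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-v0-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```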
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-2.8B-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
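A minimal causal-LM fine-tuning sketch with the Hugging Face `Trainer` is shown below; the corpus file, sequence length, and hyperparameters are placeholders rather than recommendations, and in practice a 2.8B model will likely also need gradient checkpointing or parameter-efficient methods to fit on a single GPU.
```python
# Illustrative fine-tuning sketch; data file, lengths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          GPTNeoXForCausalLM, Trainer, TrainingArguments)

model_name = "EleutherAI/pythia-2.8b-deduped-v0"  # repository linked above
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizers ship without a pad token
model = GPTNeoXForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})  # placeholder corpus

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="pythia-2.8b-deduped-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=1e-5,
    logging_steps=10,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```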
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-2.8B-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for the equivalent of 143,000 steps at a batch
size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models listed
with a batch size of 4M tokens were originally trained for 71,500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] | Non_BioNLP |
xmanii/maux-gte-persian-v2 | xmanii | sentence-similarity | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10000",
"loss:CosineSimilarityLoss",
"custom_code",
"dataset:xmanii/maux-gte-10k-public",
"arxiv:1908.10084",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:finetune:Alibaba-NLP/gte-multilingual-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,735 | 1,735 | 3,130 | 3 | ---
base_model: Alibaba-NLP/gte-multilingual-base
datasets:
- xmanii/maux-gte-10k-public
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10000
- loss:CosineSimilarityLoss
widget:
- source_sentence: چگونه ساختار خانواده بر توسعه اجتماعی تأثیر میگذارد؟
sentences:
- انتخابهای سبک زندگی مانند سیگار کشیدن، رژیم غذایی نامناسب و عدم فعالیت بدنی میتوانند
به شرایط مختلف قلبی منجر شوند.
- ساختار خانواده میتواند به طور قابل توجهی بر توسعه اجتماعی کودک تأثیر بگذارد با
ارائه سطوح مختلف حمایت عاطفی، فرصتهای اجتماعی و الگوهای رفتاری.
- صنعت فیلم به خاطر کمبود تنوع و نمایندگی مورد انتقاد قرار گرفته است.
- source_sentence: عوامل اصلی که بر تورم تأثیر میگذارند چیستند؟
sentences:
- تورم میتواند موضوع چالشبرانگیزی برای سیاستگذاران باشد.
- اکوسیستمهای بیابانی با بارش کم و دماهای بالا مشخص میشوند، که شرایط زندگی چالشبرانگیزی
برای گیاهان و جانوران ایجاد میکند.
- امتیازهای Z در توزیعهای نرمال استاندارد استفاده میشوند، در حالی که امتیازهای
t زمانی استفاده میشوند که اندازه نمونه کوچک باشد و انحراف معیار جمعیت نامشخص
باشد.
- source_sentence: آنتیبیوتیکها چگونه در سطح سلولی کار میکنند؟
sentences:
- برخی از گیاهان گوشتخوار، مانند تله ونیس، دارای حرکات سریع برای بهداماندازی
طعمههای خود هستند.
- آنتیبیوتیکها نوعی دارو هستند که میتوانند توسط پزشکان برای درمان عفونتها تجویز
شوند.
- نرخ تورم میتواند با استفاده از شاخصهای مختلفی اندازهگیری شود، مانند شاخص قیمت
مصرفکننده (CPI) و شاخص قیمت تولیدکننده (PPI).
- source_sentence: چگونه سری کتابهای «هری پاتر» ج.ک. رولینگ بر ادبیات مدرن تأثیر
گذاشته است؟
sentences:
- جی.کی. رولینگ کتابهای دیگری تحت نام مستعار رابرت گالبریت نوشته است که رمانهای
جنایی هستند.
- رنگ آکریلیک به طور معمول در هنر مدرن استفاده میشود، در حالی که رنگ روغن قرنهاست
که در هنر کلاسیک به کار میرود.
- ماهیهای اعماق دریا دارای سازگاریهایی مانند بیولومینسانس، بدنهای مقاوم به فشار
و مکانیزمهای تغذیه خاص هستند تا در شرایط شدید sobrevivir کنند.
- source_sentence: تفاوت بین کشاورزی ارگانیک و کشاورزی سنتی چیست؟
sentences:
- در حالی که بازههای اطمینان مفید هستند، در صورت عدم رعایت فرضیات زیرین، ممکن است
به اشتباه تفسیر شوند.
- تاریخ حفظ آب به تمدنهای باستانی برمیگردد که سیستمهای آبیاری را توسعه دادند.
- بازارهای کشاورزان مکانهای محبوبی برای خرید محصولات ارگانیک به طور مستقیم از کشاورزان
محلی هستند.
model-index:
- name: SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.9487949766869277
name: Pearson Cosine
- type: spearman_cosine
value: 0.947885967258665
name: Spearman Cosine
---
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) on the [maux-gte-10k-public](https://huggingface.co/datasets/xmanii/maux-gte-10k-public) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision ade1467d6266ae07e6f74aae34d56bf3b8acf3f7 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [maux-gte-10k-public](https://huggingface.co/datasets/xmanii/maux-gte-10k-public)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
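The stack above is simply a Transformer encoder followed by CLS-token pooling and L2 normalization. As a rough sketch (not the official API), the same computation with plain `transformers` could look like the following; `trust_remote_code=True` and the use of the first token's hidden state are assumptions based on the custom `NewModel` architecture and the pooling settings shown above.
```python
# Minimal sketch (assumption, not the official API): CLS pooling + L2 normalization
# with plain transformers, mirroring the Pooling and Normalize modules above.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "xmanii/maux-gte-persian-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)  # custom "NewModel" code, assumed

sentences = ["چگونه ساختار خانواده بر توسعه اجتماعی تأثیر میگذارد؟"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Keep the first ([CLS]) token, then L2-normalize, as configured above.
embeddings = F.normalize(hidden[:, 0], p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```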
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("xmanii/maux-gte-persian-v2")
# Run inference
sentences = [
'تفاوت بین کشاورزی ارگانیک و کشاورزی سنتی چیست؟',
'بازارهای کشاورزان مکان\u200cهای محبوبی برای خرید محصولات ارگانیک به طور مستقیم از کشاورزان محلی هستند.',
'تاریخ حفظ آب به تمدن\u200cهای باستانی برمی\u200cگردد که سیستم\u200cهای آبیاری را توسعه دادند.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
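For retrieval-style use, the same embeddings can back a simple semantic search. A minimal sketch with `sentence_transformers.util`, where the corpus and query below are illustrative assumptions rather than examples from the training data:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("xmanii/maux-gte-persian-v2")

# Illustrative corpus and query (assumptions, not taken from the training data)
corpus = [
    "ساختار خانواده میتواند بر توسعه اجتماعی کودک تأثیر بگذارد.",
    "نرخ تورم با شاخص قیمت مصرفکننده اندازهگیری میشود.",
]
query = "عوامل مؤثر بر تورم چیست؟"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine-similarity search over the small corpus
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```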
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.9488 |
| **spearman_cosine** | **0.9479** |
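The numbers above come from the standard `EmbeddingSimilarityEvaluator`. A sketch of how a comparable evaluation could be reproduced from the public dataset; the `train` split name is an assumption, and the exact evaluation split used by the author may differ:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("xmanii/maux-gte-persian-v2")
ds = load_dataset("xmanii/maux-gte-10k-public", split="train")  # split name is an assumption

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=ds["persian_question"],
    sentences2=ds["persian_answer"],
    scores=ds["score"],
    name="maux-gte-10k",
)
print(evaluator(model))  # reports Pearson/Spearman cosine correlations
```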
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### maux-gte-10k-public
* Dataset: [maux-gte-10k-public](https://huggingface.co/datasets/xmanii/maux-gte-10k-public) at [e20c689](https://huggingface.co/datasets/xmanii/maux-gte-10k-public/tree/e20c689e4915c4689dd54dd621ff57d5704cfaa5)
* Size: 10,000 training samples
* Columns: <code>persian_question</code>, <code>persian_answer</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | persian_question | persian_answer | score |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.7 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.01 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 0.02</li><li>mean: 0.52</li><li>max: 1.0</li></ul> |
* Samples:
| persian_question | persian_answer | score |
|:-----------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>آیا میتوانید فرآیند برنامهریزی مسیر ربات را توضیح دهید؟</code> | <code>رباتها میتوانند برنامهنویسی شوند تا مجموعهای از وظایف را انجام دهند، از اقدامهای تکراری ساده تا فرآیندهای پیچیده تصمیمگیری.</code> | <code>0.27999999999999997</code> |
| <code>آیا انسانها میتوانند در مریخ زندگی کنند؟</code> | <code>مریخ چهارمین سیاره از خورشید است و به دلیل ظاهر سرخش اغلب به سیاره سرخ معروف است.</code> | <code>0.16</code> |
| <code>عناصر کلیدی ترکیب در هنر انتزاعی چیست؟</code> | <code>تاریخ هنر انتزاعی به اوایل قرن بیستم برمیگردد، با پیشگامانی مانند واسیلی کاندینسکی و پیت موندریان.</code> | <code>0.36</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Evaluation Dataset
#### maux-gte-10k-public
* Dataset: [maux-gte-10k-public](https://huggingface.co/datasets/xmanii/maux-gte-10k-public) at [e20c689](https://huggingface.co/datasets/xmanii/maux-gte-10k-public/tree/e20c689e4915c4689dd54dd621ff57d5704cfaa5)
* Size: 10,000 evaluation samples
* Columns: <code>persian_question</code>, <code>persian_answer</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | persian_question | persian_answer | score |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.7 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.01 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 0.02</li><li>mean: 0.52</li><li>max: 1.0</li></ul> |
* Samples:
| persian_question | persian_answer | score |
|:-----------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------|
| <code>آیا میتوانید فرآیند برنامهریزی مسیر ربات را توضیح دهید؟</code> | <code>رباتها میتوانند برنامهنویسی شوند تا مجموعهای از وظایف را انجام دهند، از اقدامهای تکراری ساده تا فرآیندهای پیچیده تصمیمگیری.</code> | <code>0.27999999999999997</code> |
| <code>آیا انسانها میتوانند در مریخ زندگی کنند؟</code> | <code>مریخ چهارمین سیاره از خورشید است و به دلیل ظاهر سرخش اغلب به سیاره سرخ معروف است.</code> | <code>0.16</code> |
| <code>عناصر کلیدی ترکیب در هنر انتزاعی چیست؟</code> | <code>تاریخ هنر انتزاعی به اوایل قرن بیستم برمیگردد، با پیشگامانی مانند واسیلی کاندینسکی و پیت موندریان.</code> | <code>0.36</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
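Putting the dataset, loss, and hyperparameters above together, a comparable fine-tuning run could look roughly like the sketch below using the Sentence Transformers v3 trainer API. The output directory, the train/eval split, and the step intervals are assumptions; the author's exact recipe may differ.
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Columns: persian_question, persian_answer, score ("score" is used as the label).
dataset = load_dataset("xmanii/maux-gte-10k-public", split="train")
splits = dataset.train_test_split(test_size=0.1, seed=42)  # held-out eval split is an assumption

loss = CosineSimilarityLoss(model)  # MSE between predicted cosine similarity and the gold score

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/maux-gte-persian-v2",  # assumption
    num_train_epochs=5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    eval_steps=100,   # assumption
    save_steps=100,   # assumption
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    loss=loss,
)
trainer.train()
```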
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | spearman_cosine |
|:----------:|:--------:|:-------------:|:---------------:|:---------------:|
| 0.1597 | 50 | 0.0663 | - | - |
| 0.3195 | 100 | 0.0409 | 0.0298 | 0.7983 |
| 0.4792 | 150 | 0.0342 | - | - |
| 0.6390 | 200 | 0.0294 | 0.0230 | 0.8464 |
| 0.7987 | 250 | 0.0296 | - | - |
| 0.9585 | 300 | 0.0298 | 0.0220 | 0.8610 |
| 1.1182 | 350 | 0.0249 | - | - |
| 1.2780 | 400 | 0.0237 | 0.0230 | 0.8745 |
| 1.4377 | 450 | 0.0241 | - | - |
| 1.5974 | 500 | 0.0218 | 0.0166 | 0.8900 |
| 1.7572 | 550 | 0.0227 | - | - |
| 1.9169 | 600 | 0.0231 | 0.0148 | 0.9045 |
| 2.0767 | 650 | 0.0196 | - | - |
| 2.2364 | 700 | 0.0173 | 0.0131 | 0.9179 |
| 2.3962 | 750 | 0.0172 | - | - |
| 2.5559 | 800 | 0.0172 | 0.0119 | 0.9231 |
| 2.7157 | 850 | 0.0167 | - | - |
| 2.8754 | 900 | 0.0172 | 0.0120 | 0.9291 |
| 3.0351 | 950 | 0.0175 | - | - |
| 3.1949 | 1000 | 0.013 | 0.0100 | 0.9362 |
| 3.3546 | 1050 | 0.0128 | - | - |
| 3.5144 | 1100 | 0.0129 | 0.0101 | 0.9390 |
| 3.6741 | 1150 | 0.0134 | - | - |
| 3.8339 | 1200 | 0.0137 | 0.0095 | 0.9430 |
| 3.9936 | 1250 | 0.0133 | - | - |
| 4.1534 | 1300 | 0.0109 | 0.0096 | 0.9449 |
| 4.3131 | 1350 | 0.0114 | - | - |
| **4.4728** | **1400** | **0.0111** | **0.0083** | **0.9479** |
| 4.6326 | 1450 | 0.0107 | - | - |
| 4.7923 | 1500 | 0.0122 | 0.0085 | 0.9479 |
| 4.9521 | 1550 | 0.0112 | - | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.8
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | [
"CPI"
] | Non_BioNLP |
zirui3/alpaca-med-lora-7b | zirui3 | null | [
"arxiv:2302.13971",
"region:us"
] | 1,680 | 1,680 | 0 | 0 | ---
{}
---
# Model summary
* Instruction tuning on medical data, based on LLaMA
# Data
* Common
  * alpaca-5.2k
  * unnatural-instruct 80k
  * OIG-40M
* Chinese
  * English/Chinese translation data
  * Zhihu QA
  * pCLUE
* Medical domain
  * MedDialog-200k
  * Chinese-medical-dialogue-data
  * WebMedQA
* Code
  * alpaca_code-20k
# Training
## Model
* LLaMA-7B
## Hardware
* 6 x A100 40GB GPUs, interconnected with NVLink (4 inter-GPU links)
## Software
* tokenizers==0.12.1
* sentencepiece==0.1.97
* transformers==4.28
* torch==2.0.0+cu117
# How to use
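Note that the snippet below also imports `peft`, which is not in the version list above. A hedged environment setup under that assumption:
```bash
pip install "tokenizers==0.12.1" "sentencepiece==0.1.97" "transformers==4.28.*" "torch==2.0.0" peft
```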
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
from peft import PeftModel
base_model="llma-7b"
LORA_WEIGHTS = "llma-med-alpaca-7b"
LOAD_8BIT = False
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
load_in_8bit=LOAD_8BIT,
torch_dtype=torch.float16,
device_map="auto",
)
model = PeftModel.from_pretrained(
model,
LORA_WEIGHTS,
torch_dtype=torch.float16,
)
config = {
"temperature": 0 ,
"max_new_tokens": 1024,
"top_p": 0.5
}
prompt = "Translate to English: Je t’aime."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=config["max_new_tokens"], temperature=config["temperature"])
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
print(decoded[len(prompt):])
```
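If the adapter is only needed for inference, the LoRA weights can optionally be folded into the base model after loading. This uses the standard PEFT `merge_and_unload()` utility and is a suggestion rather than part of the original recipe:
```python
# Optional (suggestion): fold the LoRA adapters into the base weights for faster inference.
# Requires the model to be loaded in full/half precision (LOAD_8BIT = False above).
model = model.merge_and_unload()
```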
# Limitations
* This model may produce harmful, biased, toxic, or hallucinated content. It has not undergone RLHF training and is intended for research purposes only.
# TODO
- [x] self-instruct data
- [x] english medical data
- [ ] code data
- [ ] chinese corpus/medical dialog data
# References
* [LLaMA: Open and Efficient Foundation Language Models](https://arxiv.org/abs/2302.13971)
* [Alpaca: A strong open-source instruction-following model](https://crfm.stanford.edu/2023/03/13/alpaca.html) | [
"TRANSLATION"
] | [
"MEDDIALOG",
"MEDICAL DATA"
] | BioNLP |
RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-4bits | RichardErkhov | null | [
"safetensors",
"gemma",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,729 | 1,729 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vi-gemma-2b-RAG - bnb 4bits
- Model creator: https://huggingface.co/ricepaper/
- Original model: https://huggingface.co/ricepaper/vi-gemma-2b-RAG/
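Bitsandbytes exports of this kind usually ship their quantization config inside the checkpoint, so the quantized repo can typically be loaded directly. A hedged sketch, assuming `bitsandbytes` and a CUDA GPU are available and using an illustrative prompt:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RichardErkhov/ricepaper_-_vi-gemma-2b-RAG-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # picks up the stored bnb 4-bit config

inputs = tokenizer("Xin chào, bạn khỏe không?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```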
Original model description:
---
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- retrieval-augmented-generation
- transformers
- unsloth
- gemma
- trl
- sft
---
## Model Card: vi-gemma-2b-RAG
### (English below)
### Tiếng Việt (Vietnamese)
**Mô tả mô hình:**
vi-gemma-2b-RAG là một mô hình ngôn ngữ lớn được tinh chỉnh từ mô hình cơ sở [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) sử dụng kỹ thuật LoRA. Mô hình được huấn luyện trên tập dữ liệu tiếng Việt với mục tiêu cải thiện khả năng xử lý ngôn ngữ tiếng Việt và nâng cao hiệu suất cho các tác vụ truy xuất thông tin mở (Retrieval Augmented Generation - RAG).
**Mục đích sử dụng:**
Mô hình vi-gemma-2b-RAG phù hợp cho các tác vụ sau:
* Trả lời câu hỏi dựa trên ngữ cảnh tiếng Việt.
* Tóm tắt văn bản tiếng Việt.
* Dịch máy tiếng Việt.
* Và các tác vụ tạo văn bản tiếng Việt khác.
**Giới hạn:**
Mặc dù đã được tinh chỉnh cho tiếng Việt, vi-gemma-2b-RAG vẫn có thể gặp phải một số hạn chế:
* Có thể tạo ra thông tin sai lệch hoặc không chính xác.
* Có thể thể hiện thành kiến hoặc quan điểm không phù hợp.
* Hiệu suất có thể bị ảnh hưởng bởi chất lượng của dữ liệu đầu vào.
**Cách sử dụng:**
Dưới đây chúng tôi chia sẻ một số đoạn mã về cách bắt đầu nhanh chóng để sử dụng mô hình. Trước tiên, hãy đảm bảo đã cài đặt `pip install -U transformers`, sau đó sao chép đoạn mã từ phần có liên quan đến usecase của bạn.
Chúng tôi khuyến nghị sử dụng `torch.bfloat16` làm mặc định.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Khởi tạo tokenizer và model từ checkpoint đã lưu
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Sử dụng GPU nếu có
if torch.cuda.is_available():
model.to("cuda")
# Định dạng prompt cho model
prompt = """
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
{}
Hãy trả lời câu hỏi: {}
### Response:
{}
"""
# Chuẩn bị dữ liệu đầu vào
input_data = """
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
"""
query = "Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?"
# Định dạng input text
input_text = prompt.format(input_data, query," ")
# Mã hóa input text thành input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Sử dụng GPU cho input ids nếu có
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Tạo văn bản bằng model
outputs = model.generate(
**input_ids,
max_new_tokens=500,
no_repeat_ngram_size=5, # Ngăn chặn lặp lại các cụm từ 5 gram
# do_sample=True, # Kích hoạt chế độ tạo văn bản dựa trên lấy mẫu. Trong chế độ này, model sẽ chọn ngẫu nhiên token tiếp theo dựa trên xác suất được tính từ phân phối xác suất của các token.
# temperature=0.7, # Giảm temperature để kiểm soát tính ngẫu nhiên
# early_stopping=True, # Dừng tạo văn bản khi tìm thấy kết thúc phù hợp
)
# Giải mã và in kết quả
print(tokenizer.decode(outputs[0]))
'''
<bos>
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
Hãy trả lời câu hỏi: Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?
### Response:
STRs được sử dụng để xác định danh tính, chuẩn đoán bệnh lý và xác định bệnh lý di truyền.
<eos>
'''
```
**Huấn luyện:**
* **Mô hình cơ sở:** google/gemma-1.1-2b-it
* **Tập dữ liệu:** lamhieu/mabrycodes_dialogue_vi
* **Phương pháp tinh chỉnh:** LoRA, PEFT với Unsloth
## Model Card: vi-gemma-2b-RAG
### English
**Model Description:**
vi-gemma-2b-RAG is a large language model fine-tuned from the base model [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) using LoRA. The model is trained on a Vietnamese dataset to improve its Vietnamese language processing capabilities and enhance its performance for Retrieval Augmented Generation (RAG) tasks.
**Intended Use:**
The vi-gemma-2b-RAG model is suitable for tasks such as:
* Vietnamese question answering.
* Vietnamese text summarization.
* Vietnamese machine translation.
* And other Vietnamese text generation tasks.
**Limitations:**
While fine-tuned for Vietnamese, vi-gemma-2b-RAG may still have some limitations:
* It may generate incorrect or misleading information.
* It may exhibit biases or inappropriate opinions.
* Its performance may be affected by the quality of the input data.
**How to Use:**
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
We recommend `torch.bfloat16` as the default dtype.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize the tokenizer and model from the saved checkpoint
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Use GPU if available
if torch.cuda.is_available():
model.to("cuda")
# Define the prompt format for the model
prompt = """
### Instruction and Input:
Based on the following context/document:
{}
Please answer the question: {}
### Response:
{}
"""
# Prepare the input data
input_data = """
Short Tandem Repeats (STRs) are short (2-6 nucleotides) repeating DNA sequences that are widespread in the human genome. These sequences are highly polymorphic in nature, which makes STRs very important genetic markers in human gene mapping and diagnosis of hereditary diseases as well as identification in the field of forensics.
STRs have become popular in forensic laboratories because the replication and analysis of STRs requires very small amounts of DNA, even in decomposed form, identification can still be performed successfully. Furthermore, the detection and assessment of sample DNA contamination in specimens can be quickly resolved with STR analysis results. In the United States today, the set of 13 markers has now been increased to 20 main markers being used to create a nationwide DNA database called The FBI Combined DNA Index System (Expaned CODIS).
CODIS and similar DNA databases are being used very successfully in linking DNA records from criminals and crime scene evidence. STR identification results are also used to support hundreds of thousands of paternity test cases each year.'
"""
query = "Tell me what are some properties of STRs used for?"
# Format the input text
input_text = prompt.format(input_data, query," ")
# Encode the input text into input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Use GPU for input ids if available
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Generate text using the model
outputs = model.generate(
**input_ids,
max_new_tokens=500, # Limit the number of tokens generated
no_repeat_ngram_size=5, # Prevent repetition of 5-gram phrases
# do_sample=True,
# temperature=0.7, # Adjust the randomness of the generated text
# early_stopping=True, # Stop generating text when a suitable ending is found
)
# Decode and print the results
print(tokenizer.decode(outputs[0]))
```
**Training:**
* **Base Model:** google/gemma-1.1-2b-it
* **Dataset:** lamhieu/mabrycodes_dialogue_vi
* **Fine-tuning Method:** LoRA, PEFT and Unsloth
**Using example repository:** https://github.com/Martincrux/Vietnamese-RAG-system-building-with-vi-gemma-2b-RAG-and-halong_embedding
# Uploaded model
- **Developed by:** [hiieu](https://huggingface.co/hiieu), [himmeow the coder](https://huggingface.co/himmeow), [cuctrinh](https://www.linkedin.com/in/trinh-cuc-5722832b6)
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] | Non_BioNLP |
RichardErkhov/BSC-LT_-_salamandra-2b-8bits | RichardErkhov | null | [
"safetensors",
"llama",
"arxiv:2403.14009",
"arxiv:2403.20266",
"arxiv:2101.00027",
"arxiv:2207.00220",
"arxiv:1810.06694",
"arxiv:1911.05507",
"arxiv:1906.03741",
"arxiv:2406.17557",
"arxiv:2402.06619",
"arxiv:1803.09010",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,729 | 1,729 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
salamandra-2b - bnb 8bits
- Model creator: https://huggingface.co/BSC-LT/
- Original model: https://huggingface.co/BSC-LT/salamandra-2b/
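As with the other bitsandbytes exports in this collection, the 8-bit quantization config is stored with the checkpoint, so it can typically be loaded as-is. A hedged sketch, assuming `bitsandbytes` and a CUDA GPU are available:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "RichardErkhov/BSC-LT_-_salamandra-2b-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # stored 8-bit config is applied

print(f"Approx. memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```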
Original model description:
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
language:
- bg
- ca
- code
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- it
- lt
- lv
- mt
- nl
- nn
- \no
- oc
- pl
- pt
- ro
- ru
- sh
- sk
- sl
- sr
- sv
- uk
---

# Salamandra Model Card
Salamandra is a highly multilingual model pre-trained from scratch that comes in three different
sizes — 2B, 7B and 40B parameters — with their respective base and instruction-tuned variants.
This model card corresponds to the 2B base version.
To visit the model cards of other Salamandra versions, please refer to the [Model Index](#model-index).
The entire Salamandra family is released under a permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0).
Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/salamandra).
---
## Model Details
### Description
Transformer-based decoder-only language model that has been pre-trained from scratch on 7.8 trillion tokens of highly curated data.
The pre-training corpus contains text in 35 European languages and code.
### Hyperparameters
The full list of hyperparameters for each model can be found [here](https://github.com/langtech-bsc/salamandra/tree/main/configs).
### Architecture
| | |
|-------------------------|:--------------|
| Total Parameters | 2,253,490,176 |
| Embedding Parameters | 524,288,000 |
| Layers | 24 |
| Hidden size | 2,048 |
| Attention heads | 16 |
| Context length | 8,192 |
| Vocabulary size | 256,000 |
| Precision | bfloat16 |
| Embedding type | RoPE |
| Activation Function | SwiGLU |
| Layer normalization | RMS Norm |
| Flash attention | ✅ |
| Grouped Query Attention | ❌ |
| Num. query groups | N/A |
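The values above can be cross-checked against the checkpoint configuration. A small sketch, assuming the standard Llama-style attribute names used by `transformers`:
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("BSC-LT/salamandra-2b")
print(config.num_hidden_layers)        # 24 layers
print(config.hidden_size)              # 2,048 hidden size
print(config.num_attention_heads)      # 16 attention heads
print(config.max_position_embeddings)  # 8,192 context length
print(config.vocab_size)               # 256,000 vocabulary entries
```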
---
## Intended Use
### Direct Use
The models are intended for both research and commercial use in any of the languages included in the training data.
The base models are intended either for language generation or to be further fine-tuned for specific use-cases.
The instruction-tuned variants can be used as general-purpose assistants, as long as the user is fully aware of the model’s limitations.
### Out-of-scope Use
The model is not intended for malicious activities, such as harming others or violating human rights.
Any downstream application must comply with current laws and regulations.
Irresponsible usage in production environments without proper risk assessment and mitigation is also discouraged.
---
## Hardware and Software
### Training Framework
Pre-training was conducted using NVIDIA’s [NeMo Framework](https://docs.nvidia.com/nemo-framework/index.html),
which leverages PyTorch Lightning for efficient model training in highly distributed settings.
The instruction-tuned versions were produced with [FastChat](https://github.com/lm-sys/FastChat).
### Compute Infrastructure
All models were trained on [MareNostrum 5](https://www.bsc.es/ca/marenostrum/marenostrum-5), a pre-exascale EuroHPC supercomputer hosted and
operated by Barcelona Supercomputing Center.
The accelerated partition is composed of 1,120 nodes with the following specifications:
- 4x NVIDIA Hopper GPUs with 64 GB of HBM2 memory each
- 2x Intel Sapphire Rapids 8460Y+ at 2.3 GHz, 32 cores each (64 cores per node)
- 4x NDR200 (BW per node 800Gb/s)
- 512 GB of Main memory (DDR5)
- 460GB on NVMe storage
|Model|Nodes|GPUs|
|:---:|:---:|:---:|
|2B|64|256|
|7B|128|512|
|40B|256 / 512|1,024 / 2,048|
---
## How to use
This section offers examples of how to perform inference using various methods.
### Inference
You'll find different techniques for running inference, including Huggingface's Text Generation Pipeline, multi-GPU configurations, and vLLM for scalable and efficient generation.
#### Inference with Huggingface's Text Generation Pipeline
The Huggingface Text Generation Pipeline provides a straightforward way to run inference using the Salamandra-2b model.
```bash
pip install transformers torch accelerate sentencepiece protobuf
```
<details>
<summary>Show code</summary>
```python
from transformers import pipeline, set_seed
model_id = "BSC-LT/salamandra-2b"
# Sample prompts
prompts = [
"Todo el mundo sabe que vivir en Barcelona es",
"¿Pueblo o ciudad? Una ventaja de vivir en la ciudad es que hay muchas oportunidades de ocio y empleo, así como una gran diversidad de comercios para todos los gustos. Sin embargo, las ciudades suelen ser ",
"Llegir ens proporciona",
"What I find more fascinating about languages is that",
"La vie peut être",
"The future of AI is",
]
# Create the pipeline
generator = pipeline("text-generation", model_id, device_map="auto")
generation_args = {
"temperature": 0.1,
"top_p": 0.95,
"max_new_tokens": 25,
"repetition_penalty": 1.2,
"do_sample": True
}
# Fix the seed
set_seed(1)
# Generate texts
outputs = generator(prompts, **generation_args)
# Print outputs
for output in outputs:
print(output[0]["generated_text"])
```
</details>
#### Inference with single / multi GPU
This section provides a simple example of how to run inference using Huggingface's AutoModel class.
```bash
pip install transformers torch accelerate sentencepiece protobuf
```
<details>
<summary>Show code</summary>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "BSC-LT/salamandra-2b"
# Input text
text = "El mercat del barri és"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the model
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.bfloat16
)
generation_args = {
"temperature": 0.1,
"top_p": 0.95,
"max_new_tokens": 25,
"repetition_penalty": 1.2,
"do_sample": True
}
inputs = tokenizer(text, return_tensors="pt")
# Generate texts
output = model.generate(input_ids=inputs["input_ids"].to(model.device), attention_mask=inputs["attention_mask"], **generation_args)
# Print outputs
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
</details>
#### Inference with vLLM
vLLM is an efficient library for inference that enables faster and more scalable text generation.
```bash
pip install vllm
```
<details>
<summary>Show code</summary>
```python
from vllm import LLM, SamplingParams
model_id = "BSC-LT/salamandra-2b"
# Sample prompts
prompts = [
"Todo el mundo sabe que vivir en Barcelona es",
"¿Pueblo o ciudad? Una ventaja de vivir en la ciudad es que hay muchas oportunidades de ocio y empleo, así como una gran diversidad de comercios para todos los gustos. Sin embargo, las ciudades suelen ser ",
"Llegir ens proporciona",
"What I find more fascinating about languages is that",
"La vie peut être",
"The future of AI is",
]
# Create a sampling params object
sampling_params = SamplingParams(
temperature=0.1,
top_p=0.95,
seed=1,
max_tokens=25,
repetition_penalty=1.2)
# Create an LLM
llm = LLM(model=model_id)
# Generate texts
outputs = llm.generate(prompts, sampling_params)
# Print outputs
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
</details>
---
## Data
### Pretraining Data
The training corpus consists of 2.4 trillion tokens, including 35 European languages and 92 programming languages. It amounts to a total of 33TB of pre-processed text.
Languages were sampled manually by giving x2 oversampling to Spain's co-official languages (Spanish, Catalan, Galician and Basque), code was undersampled by half,
and the rest of the languages were kept as is, resulting in the following distribution:

This highly multilingual corpus is predominantly composed of data from Colossal OSCAR,
which contributes a significant 66.06% of the total tokens.
Following this, Starcoder provides 11.91%, and Spanish Crawling adds 3.34%.
The next largest sources are French FR at 3.12% and Proof Pile at 1.98%.
Other notable contributions include Macocu, Pile of Law, and Eurlex, each contributing between 1.3% and 1.5%.
These major sources collectively form the bulk of the corpus, ensuring a rich and diverse dataset for training the language model.
The remaining 10% comes from smaller sources in various languages.
Feel free to click the expand button below to see the full list of sources.
<details>
<summary>Data Sources</summary>
| Dataset | Language | Source |
|-----------------------------------------------|---------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| Parlamint corpus | at, bg, cz, dk, ee, es, es-ga, fi, fr, gb, gr, hr, hu, it, lv, nl, no, pl, pt, rs, se, si | Erjavec et al., 2021 |
| Bulgarian National Corpus | bg | [Link](http://old.dcl.bas.bg/dataset/BulNC.7z) |
| Crawl of Bulgarian news websites | bg | [Link](http://old.dcl.bas.bg/dataset/Bulgarian_news.7z) |
| Colossal OSCAR 1.0 | bg, ca, cs, cy, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, oc, pl, pt, ro, ru, sh, sk, sl, sr, sv, uk | Brack et al., 2024 |
| Wikimedia dumps | bg, ca, cs, da, de, el, en, es, et, eu, fi, fr, ga, gl, hr, hu, it, lt, lv, mt, nl, nn, no, pl, pt, ro, sh, sk, sl, sr, uk | [Link](https://dumps.wikimedia.org/) |
| OpenSubtitlesv2016 | bg, ca, cs, da, de, el, en, es, et, eu, fi, fr, gl, hr, it, lt, lv, nl, no, pl, pt, ro, sk, sl, sr, sv, uk | Lison & Tiedemann, 2016 |
| MaCoCu web corpus | bg, ca, el, hr, mt, sl, sr, uk | Bañón et al., 2022 |
| EurLEX-Resources | bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv | [Link](https://huggingface.co/datasets/joelniklaus/eurlex_resources) |
| MC4-Legal | bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv | [Link](https://huggingface.co/datasets/joelito/legal-mc4) |
| CURLICAT Corpus | bg, hr, hu, pl, ro, sk, sl | Váradi et al., 2022 |
| CATalog | ca | Palomar-Giner et al., 2024 |
| Spanish Crawling | ca, es, eu, gl | Relevant Spanish websites crawling |
| Starcoder | code | Li et al., 2023 |
| SYN v9: large corpus of written Czech | cs | Křen et al., 2021 |
| Welsh-GOV | cy | Crawling from [Link](https://www.llyw.cymru) |
| DaNewsroom | da | Varab & Schluter, 2020 |
| Danish GigaWord | da | Strømberg-Derczynski et al., 2021 |
| DK-CLARIN Reference Corpus of General Danish | da | [Link](https://korpus.dsl.dk/clarin/) |
| The Danish Parliament Corpus 2009 - 2017, v1 | da | Hansen, 2018 |
| DeWaC | de | [Link](https://docs.sslmit.unibo.it/doku.php?id=corpora:dewac) |
| Open Legal Data - German court decisions and laws | de | Ostendorff et al., 2020 |
| Greek Legal Code | el | Papaloukas et al., 2021 |
| Greek Web Corpus | el | Outsios et al., 2018 |
| Auxiliary Mathematics Problems and Solutions (AMPS) dataset | en | Hendrycks et al., 2021 |
| BIGPATENT | en | Sharma et al., 2019 |
| FineWeb-Edu (350BT subset) | en | Penedo et al., 2024 |
| peS2o | en | Soldaini & Lo, 2023 |
| PG-19 | en | Rae et al., 2019 |
| Pile of Law (selected subsets) | en | Henderson* et al., 2022 |
| proof-pile | en | [Link](https://huggingface.co/datasets/hoskinson-center/proof-pile) |
| RedPajama-Data T1 (StackExchange subset) | en | Computer, 2023 |
| The Pile (PhilPapers subset) | en | Gao et al., 2021 |
| Biomedical | es | Internally generated scientific dataset: Dialnet, Scielo, CSIC, TDX, BSC, UCM |
| HPLTDatasets v1 - Spanish | es | de Gibert et al., 2024 |
| Legal | es | Internally generated legal dataset: BOE, BORME, Senado, Congreso, Spanish court orders, DOGC |
| Scientific | es | Internally generated scientific dataset: Wikipedia LS, Pubmed, MeSpEn, patents, clinical cases, medical crawler |
| Spanish Legal Domain Corpora | es | Gutiérrez-Fandiño et al., 2021 |
| Estonian National Corpus 2021 | et | Koppel & Kallas, 2022 |
| Estonian Reference Corpus | et | [Link](https://www.cl.ut.ee/korpused/segakorpus/) |
| EusCrawl (w/o Wikipedia or NC-licenses) | eu | Artetxe et al., 2022 |
| Latxa Corpus v1.1 | eu | Etxaniz et al., 2024 [Link](https://huggingface.co/datasets/HiTZ/latxa-corpus-v1.1) |
| Aya Dataset (w/o Evaluation Suite) | eu, hr, nl, fi, ka, hu, lt, nn, ro, sk, lv, cy, bg, cs, en, fr, de, ga, mt, pl, ru, sl, sv, ca, da, et, gl, el, it, no, pt, sr, es, uk | Singh et al., 2024 |
| Yle Finnish News Archive | fi | [Link](http://urn.fi/urn:nbn:fi:lb-2021050401) |
| CaBeRnet: a New French Balanced Reference Corpus | fr | Popa-Fabre et al., 2020 |
| French Public Domain Books | fr | [Link](https://huggingface.co/datasets/PleIAs/French-PD-Books) |
| French Public Domain Newspapers | fr | [Link](https://huggingface.co/datasets/PleIAs/French-PD-Newspapers) |
| Irish Universal Dependencies | ga | [Link](https://universaldependencies.org/ga/index.html) |
| The Gaois bilingual corpus of English-Irish legislation (Irish legislation) | ga | [Link](https://portulanclarin.net/repository/browse/the-gaois-bilingual-corpus-of-english-irish-legislation-processed/daeac17c9e3511ea9b7f02420a000407b83de243dc0b469aab41084386c5b80f/) |
| CorpusNÓS | gl | de-Dios-Flores et al., 2024 |
| Croatian web corpus hrWaC 2.1 | hr | Ljubešić & Klubička, 2014 |
| ITWaC | it | [Link](https://docs.sslmit.unibo.it/doku.php?id=corpora:itwac) |
| Corpus of State-related content from the Latvian Web (Processed) | lv | [Link](https://catalog.elra.info/en-us/repository/browse/ELRA-W0169/) |
| Korpus Malti | mt | Micallef et al., 2022 |
| SoNaR Corpus NC 1.2 | nl | [Link](https://taalmaterialen.ivdnt.org/download/tstc-sonar-corpus/) |
| Norwegian Colossal Corpus | nn, no | Kummervold et al., 2021 |
| Occitan Corpus | oc | Provided by [IEA](https://www.institutestudisaranesi.cat/) |
| NKJP-PodkorpusMilionowy-1.2 (National Corpus of Polish) | pl | Lewandowska-Tomaszczyk et al., 2013 |
| Polish Parliamentary Corpus / Korpus Dyskursu Parlamentarnego | pl | Ogrodniczuk, 2018 |
| Brazilian Portuguese Web as Corpus | pt | Wagner Filho et al., 2018 |
| ParlamentoPT | pt | Rodrigues et al., 2023 |
| MARCELL Romanian legislative subcorpus v2 | ro | [Link](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/) |
| Korpus slovenských právnych predpisov v1.9 | sk | [Link](https://www.juls.savba.sk/data/marcell/legal-sk-20220322-1.9.ver.xz) |
| od-justice 2.0 | sk | [Link](https://www.juls.savba.sk/data/od-justice/od-justice-2.0.ver.xz) |
| Corpus of academic Slovene KAS 2.0 | sl | Žagar et al., 2022 |
| slWaC web corpus | sl | Erjavec et al., 2015 |
| SrpKorSubset (news, legal, academic, conversation, literary) | sr | [Link](http://www.korpus.matf.bg.ac.rs/) |
| The Swedish Culturomics Gigaword Corpus | sv | Rødven-Eide, 2016 |
| Corpus of laws and legal acts of Ukraine | uk | [Link](https://lang.org.ua/en/corpora/#anchor7) |
<details>
<summary>References</summary>
- Abadji, J., Suárez, P. J. O., Romary, L., & Sagot, B. (2021). Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus (H. Lüngen, M. Kupietz, P. Bański, A. Barbaresi, S. Clematide, & I. Pisetta, Eds.; pp. 1–9). Leibniz-Institut für Deutsche Sprache. [Link](https://doi.org/10.14618/ids-pub-10468)
- Artetxe, M., Aldabe, I., Agerri, R., Perez-de-Viñaspre, O., & Soroa, A. (2022). Does Corpus Quality Really Matter for Low-Resource Languages?
- Bañón, M., Esplà-Gomis, M., Forcada, M. L., García-Romero, C., Kuzman, T., Ljubešić, N., van Noord, R., Sempere, L. P., Ramírez-Sánchez, G., Rupnik, P., Suchomel, V., Toral, A., van der Werff, T., & Zaragoza, J. (2022). MaCoCu: Massive collection and curation of monolingual and bilingual data: Focus on under-resourced languages. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, 303–304. [Link](https://aclanthology.org/2022.eamt-1.41)
- Brack, M., Ostendorff, M., Suarez, P. O., Saiz, J. J., Castilla, I. L., Palomar-Giner, J., Shvets, A., Schramowski, P., Rehm, G., Villegas, M., & Kersting, K. (2024). Community OSCAR: A Community Effort for Multilingual Web Data. [Link](https://occiglot.eu/papers/Community_Oscar.pdf)
- Computer, T. (2023). RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset [Computer software]. [Link](https://github.com/togethercomputer/RedPajama-Data)
- de Gibert, O., Nail, G., Arefyev, N., Bañón, M., van der Linde, J., Ji, S., Zaragoza-Bernabeu, J., Aulamo, M., Ramírez-Sánchez, G., Kutuzov, A., Pyysalo, S., Oepen, S., & Tiedemann, J. (2024). A New Massive Multilingual Dataset for High-Performance Language Technologies (arXiv:2403.14009). arXiv. [Link](http://arxiv.org/abs/2403.14009)
- Dodge, J., Sap, M., Marasović, A., Agnew, W., Ilharco, G., Groeneveld, D., Mitchell, M., & Gardner, M. (2021). Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus. In M.-F. Moens, X. Huang, L. Specia, & S. W. Yih (Eds.), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 1286–1305). Association for Computational Linguistics. [Link](https://doi.org/10.18653/v1/2021.emnlp-main.98)
- Erjavec, T., Ljubešić, N., & Logar, N. (2015). The slWaC corpus of the Slovene web. Informatica (Slovenia), 39, 35–42.
- Erjavec, T., Ogrodniczuk, M., Osenova, P., Ljubešić, N., Simov, K., Grigorova, V., Rudolf, M., Pančur, A., Kopp, M., Barkarson, S., Steingrímsson, S. hór, van der Pol, H., Depoorter, G., de Does, J., Jongejan, B., Haltrup Hansen, D., Navarretta, C., Calzada Pérez, M., de Macedo, L. D., … Rayson, P. (2021). Linguistically annotated multilingual comparable corpora of parliamentary debates ParlaMint.ana 2.1. [Link](http://hdl.handle.net/11356/1431)
- Etxaniz, J., Sainz, O., Perez, N., Aldabe, I., Rigau, G., Agirre, E., Ormazabal, A., Artetxe, M., & Soroa, A. (2024). Latxa: An Open Language Model and Evaluation Suite for Basque. [Link] (https://arxiv.org/abs/2403.20266)
- Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., & Leahy, C. (2021). The Pile: An 800GB Dataset of Diverse Text for Language Modeling. CoRR, abs/2101.00027. [Link](https://arxiv.org/abs/2101.00027)
- Gutiérrez-Fandiño, A., Armengol-Estapé, J., Gonzalez-Agirre, A., & Villegas, M. (2021). Spanish Legalese Language Model and Corpora.
- Hansen, D. H. (2018). The Danish Parliament Corpus 2009—2017, v1. [Link](http://hdl.handle.net/20.500.12115/8)
- Henderson*, P., Krass*, M. S., Zheng, L., Guha, N., Manning, C. D., Jurafsky, D., & Ho, D. E. (2022). Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset. arXiv. [Link](https://arxiv.org/abs/2207.00220)
- Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). Measuring Mathematical Problem Solving With the MATH Dataset. NeurIPS.
- Jansen, T., Tong, Y., Zevallos, V., & Suarez, P. O. (2022). Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data.
- Koppel, K., & Kallas, J. (2022). Eesti keele ühendkorpuste sari 2013–2021: Mahukaim eestikeelsete digitekstide kogu. Eesti Rakenduslingvistika Ühingu Aastaraamat Estonian Papers in Applied Linguistics, 18, 207–228. [Link](https://doi.org/10.5128/erya18.12)
- Křen, M., Cvrček, V., Henyš, J., Hnátková, M., Jelínek, T., Kocek, J., Kováříková, D., Křivan, J., Milička, J., Petkevič, V., Procházka, P., Skoumalová, H., Šindlerová, J., & Škrabal, M. (2021). SYN v9: Large corpus of written Czech. [Link](http://hdl.handle.net/11234/1-4635)
- Kreutzer, J., Caswell, I., Wang, L., Wahab, A., van Esch, D., Ulzii-Orshikh, N., Tapo, A., Subramani, N., Sokolov, A., Sikasote, C., Setyawan, M., Sarin, S., Samb, S., Sagot, B., Rivera, C., Rios, A., Papadimitriou, I., Osei, S., Suarez, P. O., … Adeyemi, M. (2022). Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets. Transactions of the Association for Computational Linguistics, 10, 50–72. [Link](https://doi.org/10.1162/tacl_a_00447)
- Kummervold, P. E., De la Rosa, J., Wetjen, F., & Brygfjeld, S. A. (2021). Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model. In S. Dobnik & L. Øvrelid (Eds.), Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa) (pp. 20–29). Linköping University Electronic Press, Sweden. [Link](https://aclanthology.org/2021.nodalida-main.3)
- Lewandowska-Tomaszczyk, B., Górski, R., Łaziński, M., & Przepiórkowski, A. (2013). The National Corpus of Polish (NKJP). Language use and data analysis. 309–319.
- Li, R., Allal, L. B., Zi, Y., Muennighoff, N., Kocetkov, D., Mou, C., Marone, M., Akiki, C., Li, J., Chim, J., Liu, Q., Zheltonozhskii, E., Zhuo, T. Y., Wang, T., Dehaene, O., Davaadorj, M., Lamy-Poirier, J., Monteiro, J., Shliazhko, O., … Vries, H. de. (2023). StarCoder: May the source be with you!
- Lison, P., & Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In N. Calzolari, K. Choukri, T. Declerck, S. Goggi, M. Grobelnik, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16) (pp. 923–929). European Language Resources Association (ELRA). [Link](https://aclanthology.org/L16-1147)
- Ljubešić, N., & Klubička, F. (2014). Bs,hr,srWaC - Web Corpora of Bosnian, Croatian and Serbian. In F. Bildhauer & R. Schäfer (Eds.), Proceedings of the 9th Web as Corpus Workshop (WaC-9) (pp. 29–35). Association for Computational Linguistics. [Link](https://doi.org/10.3115/v1/W14-0405)
- Micallef, K., Gatt, A., Tanti, M., van der Plas, L., & Borg, C. (2022). Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese. Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, 90–101. [Link](https://doi.org/10.18653/v1/2022.deeplo-1.10)
- Ogrodniczuk, M. (2018). Polish Parliamentary Corpus. [Link](https://api.semanticscholar.org/CorpusID:235134113)
- Ostendorff, M., Blume, T., & Ostendorff, S. (2020). Towards an Open Platform for Legal Information. Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, 385–388. [Link](https://doi.org/10.1145/3383583.3398616)
- Ostendorff, M., Suarez, P. O., Lage, L. F., & Rehm, G. (2024). LLM-Datasets: An Open Framework for Pretraining Datasets of Large Language Models. First Conference on Language Modeling. [Link](https://openreview.net/forum?id=5RdIMlGLXL)
- Outsios, S., Skianis, K., Meladianos, P., Xypolopoulos, C., & Vazirgiannis, M. (2018). Word Embeddings from Large-Scale Greek Web content. arXiv Preprint arXiv:1810.06694.
- Palomar-Giner, J., Saiz, J. J., Espuña, F., Mina, M., Da Dalt, S., Llop, J., Ostendorff, M., Ortiz Suarez, P., Rehm, G., Gonzalez-Agirre, A., & Villegas, M. (2024). A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 335–349). ELRA and ICCL. [Link](https://aclanthology.org/2024.lrec-main.31)
- Papaloukas, C., Chalkidis, I., Athinaios, K., Pantazi, D.-A., & Koubarakis, M. (2021). Multi-granular Legal Topic Classification on Greek Legislation. Proceedings of the Natural Legal Language Processing Workshop 2021, 63–75. [Link](https://doi.org/10.48550/arXiv.2109.15298)
- Popa-Fabre, M., Ortiz Suárez, P. J., Sagot, B., & de la Clergerie, É. (2020). French Contextualized Word-Embeddings with a sip of CaBeRnet: A New French Balanced Reference Corpus. Proceedings of the 8th Workshop on Challenges in the Management of Large Corpora, 15–23. [Link](https://aclanthology.org/2020.cmlc-1.3)
- Rae, J. W., Potapenko, A., Jayakumar, S. M., Hillier, C., & Lillicrap, T. P. (2019). Compressive Transformers for Long-Range Sequence Modelling. arXiv Preprint. [Link](https://arxiv.org/abs/1911.05507)
- Rodrigues, J., Gomes, L., Silva, J., Branco, A., Santos, R., Cardoso, H. L., & Osório, T. (2023). Advancing Neural Encoding of Portuguese with Transformer Albertina PT-\*.
- Rødven-Eide, S. (2016). The Swedish Culturomics Gigaword CorpusThe Swedish Culturomics Gigaword Corpus [Dataset]. Språkbanken Text. [Link](https://doi.org/10.23695/3WMV-1Z09)
- Sharma, E., Li, C., & Wang, L. (2019). BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization. CoRR, abs/1906.03741. [Link](http://arxiv.org/abs/1906.03741)
- Soldaini, L., & Lo, K. (2023). peS2o (Pretraining Efficiently on S2ORC) Dataset. Allen Institute for AI.
- Strømberg-Derczynski, L., Ciosici, M., Baglini, R., Christiansen, M. H., Dalsgaard, J. A., Fusaroli, R., Henrichsen, P. J., Hvingelby, R., Kirkedal, A., Kjeldsen, A. S., Ladefoged, C., Nielsen, F. Å., Madsen, J., Petersen, M. L., Rystrøm, J. H., & Varab, D. (2021). The Danish Gigaword Corpus. Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa), 413–421. [Link](https://aclanthology.org/2021.nodalida-main.46)
- Subramani, N., Luccioni, S., Dodge, J., & Mitchell, M. (2023). Detecting Personal Information in Training Corpora: An Analysis. 208–220. [Link](https://doi.org/10.18653/v1/2023.trustnlp-1.18)
- Varab, D., & Schluter, N. (2020). DaNewsroom: A Large-scale Danish Summarisation Dataset. Proceedings of The 12th Language Resources and Evaluation Conference, 6731–6739. [Link](https://www.aclweb.org/anthology/2020.lrec-1.831)
- Váradi, T., Nyéki, B., Koeva, S., Tadić, M., Štefanec, V., Ogrodniczuk, M., Nitoń, B., Pezik, P., Barbu Mititelu, V., Irimia, E., Mitrofan, M., Tufiș, D., Garabík, R., Krek, S., & Repar, A. (2022). Introducing the CURLICAT Corpora: Seven-language Domain Specific Annotated Corpora from Curated Sources. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, J. Odijk, & S. Piperidis (Eds.), Proceedings of the Thirteenth Language Resources and Evaluation Conference (pp. 100–108). European Language Resources Association. [Link](https://aclanthology.org/2022.lrec-1.11)
- Wagner Filho, J. A., Wilkens, R., Idiart, M., & Villavicencio, A. (2018). The brwac corpus: A new open resource for brazilian portuguese. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
- Žagar, A., Kavaš, M., Robnik-Šikonja, M., Erjavec, T., Fišer, D., Ljubešić, N., Ferme, M., Borovič, M., Boškovič, B., Ojsteršek, M., & Hrovat, G. (2022). Corpus of academic Slovene KAS 2.0. [Link](http://hdl.handle.net/11356/1448)
- Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2086–2105, Dublin, Ireland. Association for Computational Linguistics.
- Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407–3412, Hong Kong, China. Association for Computational Linguistics.
- Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., & Tafjord, O. (2018). Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. arXiv:1803. 05457v1.
- Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
- Penedo, G., Kydlíček, H., allal, L. B., Lozhkov, A., Mitchell, M., Raffel, C., Von Werra, L., & Wolf, T. (2024). The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale (arXiv:2406.17557). arXiv. http://arxiv.org/abs/2406.17557
- Singh, S., Vargus, F., Dsouza, D., Karlsson, B. F., Mahendiran, A., Ko, W.-Y., Shandilya, H., Patel, J., Mataciunas, D., OMahony, L., Zhang, M., Hettiarachchi, R., Wilson, J., Machado, M., Moura, L. S., Krzemiński, D., Fadaei, H., Ergün, I., Okoh, I., … Hooker, S. (2024). Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning (arXiv:2402.06619). arXiv. http://arxiv.org/abs/2402.06619
</details>
</details>
The model was trained for 3 epochs, with two final rounds of 0.3B higher-quality tokens each,
meaning that the total number of tokens seen during pre-training amounts to roughly 7.8 trillion tokens.
We provide an extensive Datasheet section following the best practices defined by [(Gebru et al., 2021)](https://arxiv.org/pdf/1803.09010).
<details>
<summary>Datasheet</summary>
#### Motivation
**For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.**
The purpose of creating this dataset is to pre-train the Salamandra family of multilingual models with high performance in a large number of
European languages (35) and code (including 92 different programming languages). In addition, we aim to give particular weight to the co-official
languages of Spain: Spanish, Catalan, Galician, and Basque, which is why we oversample these languages.
We found a significant shortage of massive multilingual data, especially for minority languages (Ostendorff & Rehm, 2023), so part of
our effort in creating this pre-training dataset has gone into contributing to large projects such as the Community OSCAR
(Brack et al., 2024), which includes 151 languages and 40T words, and CATalog (Palomar-Giner et al., 2024), the largest open dataset in
Catalan in the world.
**Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**
The dataset has been created by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center - Centro Nacional de
Supercomputación (BSC-CNS), which aims to advance the field of natural language processing through cutting-edge research and development
and the use of HPC. In particular, it was created by the unit's data team, the main contributors being Javier Saiz, Ferran Espuña, and
Jorge Palomar.
However, the creation of the dataset would not have been possible without the collaboration of a large number of partners
and public institutions, which are listed in detail in the acknowledgements.
**Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.**
This work/research has been promoted and financed by the Government of Catalonia through the [Aina project](https://projecteaina.cat/).
#### Composition
**What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**
The dataset consists entirely of text documents in various languages. Specifically, data was mainly sourced from the following databases and
repositories:
- **Common Crawl:** Repository that holds website data and is run by the Common Crawl non-profit organization. It is updated monthly and is
distributed under the CC0 1.0 public domain license.
- **GitHub:** Community platform that allows developers to create, store, manage, and share their code. Repositories are crawled and then
distributed with their original licenses, which may vary from permissive to non-commercial licenses.
- **Wikimedia:** Database that holds the collection of databases managed by the Wikimedia Foundation, including Wikipedia, Wikibooks, Wikinews,
Wikiquote, Wikisource, and Wikivoyage. It is updated monthly and is distributed under the Creative Commons Attribution-ShareAlike 4.0 license.
- **EurLex:** Repository that holds the collection of legal documents from the European Union, available in all of the EU’s 24 official
languages and run by the Publications Office of the European Union. It is updated daily and is distributed under the Creative Commons
Attribution 4.0 International license.
- **Other repositories:** Specific repositories were crawled under permission for domain-specific corpora, which include academic, legal,
and newspaper repositories.
We provide a complete list of dataset sources at the end of this section.
**How many instances are there in total (of each type, if appropriate)?**
The dataset contains a diverse range of instances across multiple languages, with notable adjustments for certain languages. English
represents the largest portion, accounting for 39.08% of the total data. Spanish was upsampled by a factor of 2, bringing its share to 16.59%,
while Catalan (1.84%), Basque (0.26%), and Galician (0.36%) were also upsampled by 2. On the other hand, code-related data was downsampled
by half, making up 6.42% of the total. Other prominent languages include French (6.59%), Russian (5.39%), German (4.25%), and Hungarian
(3.93%), with several additional languages contributing between 1% and 2%, and smaller portions represented by a variety of others.
**Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).**
The dataset is a sample from multiple sources, with different weights based on the primary language of the content: Spanish, Catalan,
Basque, and Galician content was upsampled by a factor of two, while programming languages were downsampled to half their original
sampling probability. Other sources were sampled in proportion to their occurrence.
**What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.**
Each instance consists of a text document processed for deduplication, language identification, and source-specific filtering. Some
documents required optical character recognition (OCR) to extract text from non-text formats such as PDFs.
**Is there a label or target associated with each instance? If so, please provide a description.**
Each instance is labeled with a unique identifier, the primary language of the content, and, for web-sourced instances, the URL. Additional
labels were automatically assigned to detect specific types of content (harmful or toxic content) and to flag preliminary indicators of
undesired qualities (very short documents, a high density of symbols, etc.), which were used for filtering instances.
**Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.**
No significant information is missing from the instances.
**Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.**
Instances are related through shared metadata, such as source and language identifiers.
**Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.**
The dataset is split randomly into training, validation, and test sets.
**Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.**
Despite removing duplicated instances within each source, redundancy remains at the paragraph and sentence levels, particularly in
web-sourced instances where SEO techniques and templates contribute to repeated textual patterns. Some instances may also be duplicated
across sources due to format variations.
**Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.**
The dataset is self-contained and does not rely on external resources.
**Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description.**
The dataset does not contain confidential data.
**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. If the dataset does not relate to people, you may skip the remaining questions in this section.**
The dataset includes web-crawled content, which may overrepresent pornographic material across languages (Kreutzer et al., 2022). Although
pre-processing techniques were applied to mitigate offensive content, the heterogeneity and scale of web-sourced data make exhaustive
filtering challenging: it is next to impossible to identify all adult content without resorting to excessive filtering, which may in turn
disproportionately affect certain demographic groups (Dodge et al., 2021).
**Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.**
The dataset does not explicitly identify any subpopulations.
**Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how.**
Web-sourced instances in the dataset may contain personally identifiable information (PII) that is publicly available on the Web, such as
names, IP addresses, email addresses, and phone numbers. While it would be possible to indirectly identify individuals through the
combination of multiple data points, the nature and scale of web data make it difficult to parse such information. In any case, efforts are
made to filter or anonymize sensitive data during pre-processing, but some identifiable information may remain in the dataset.
**Does the dataset contain data that might be considered sensitive in any way? If so, please provide a description.**
Given that the dataset includes web-sourced content and other publicly available documents, instances may inadvertently reveal financial
information, health-related details, or forms of government identification, such as social security numbers (Subramani et al., 2023),
especially if the content originates from less-regulated sources or user-generated platforms.
#### Collection Process
**How was the data collected?**
This dataset is constituted by combining several sources, whose acquisition methods can be classified into three groups:
- Web-sourced datasets with some preprocessing available under permissive license (e.g., Common Crawl).
- Domain-specific or language-specific raw crawls (e.g., Spanish Crawling).
- Manually curated data obtained through collaborators, data providers (by means of legal assignment agreements), or open-source projects
(e.g., CATalog).
**What mechanisms or procedures were used to collect the data? How were these mechanisms or procedures validated?**
According to the three groups previously defined, these are the mechanisms used in each of them:
- Open direct download. Validation: data integrity tests.
- Ad-hoc scrapers or crawlers. Validation: software unit and data integrity tests.
- Direct download via FTP, SFTP, API or S3. Validation: data integrity tests.
**If the dataset is a sample from a larger set, what was the sampling strategy?**
The sampling strategy was to use the whole dataset resulting from the filtering explained in the ‘preprocessing/cleaning/labelling’ section,
with the particularity that an upsampling of 2 (i.e. twice the probability of sampling a document) was performed for the co-official
languages of Spain (Spanish, Catalan, Galician, Basque), and a downsampling of 1/2 was applied for code (half the probability of sampling a
code document, evenly distributed among all programming languages).
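As an illustration only, a minimal Python sketch of how such per-language sampling weights could be applied is given below. The actual training code is not reproduced here: the `SAMPLING_WEIGHTS` mapping, the document structure, and the sampling routine are assumptions made for the example, with only the 2x upsampling and 1/2 downsampling factors taken from the description above.
```python
import random

# Assumed mapping, for illustration only: factors taken from the description above.
SAMPLING_WEIGHTS = {
    "es": 2.0,    # Spanish
    "ca": 2.0,    # Catalan
    "gl": 2.0,    # Galician
    "eu": 2.0,    # Basque
    "code": 0.5,  # all programming languages combined
}

def sample_documents(documents, n_samples, seed=42):
    """Draw documents with per-language weights; unlisted languages keep weight 1.0."""
    rng = random.Random(seed)
    weights = [SAMPLING_WEIGHTS.get(doc["lang"], 1.0) for doc in documents]
    return rng.choices(documents, weights=weights, k=n_samples)

# Toy usage with hypothetical documents.
docs = [
    {"lang": "es", "text": "Hola mundo"},
    {"lang": "en", "text": "Hello world"},
    {"lang": "code", "text": "print('hi')"},
]
print(sample_documents(docs, n_samples=5))
```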
**Who was involved in the data collection process and how were they compensated?**
This data is generally extracted, filtered and sampled by automated processes. The code required to run these processes has been developed
entirely by members of the LangTech data team, or otherwise obtained from open-source software. Furthermore, there has been no monetary
consideration for acquiring data from suppliers.
**Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances? If not, please describe the timeframe in which the data associated with the instances was created.**
Data were acquired and processed from April 2023 to April 2024. However, as mentioned, much data has been obtained from open projects such
as Common Crawl, which contains data from 2014, so it is the end date (04/2024) rather than the start date that is important.
**Were any ethical review processes conducted? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.**
No particular ethical review process has been carried out as the data is mostly open and not particularly sensitive. However, we have an
internal evaluation team and a bias team to monitor ethical issues. In addition, we work closely with ‘Observatori d'Ètica en Intel·ligència
Artificial’ (OEIAC) and ‘Agencia Española de Supervisión de la Inteligencia Artificial’ (AESIA) to audit the processes we carry out from an
ethical and legal point of view, respectively.
#### Preprocessing
**Was any preprocessing/cleaning/labeling of the data done? If so, please provide a description. If not, you may skip the remaining questions in this section.**
Instances of text documents were not altered, but web-sourced documents were filtered based on specific criteria along two dimensions:
- Quality: documents with a quality score lower than 0.8, computed with CURATE (Palomar-Giner et al., 2024) from undesired qualities such as
a low number of lines, very short sentences, the presence of long footers and headers, and a high percentage of punctuation,
were filtered out.
- Harmful or adult content: documents originating from Colossal OSCAR were filtered using LLM-Datasets (Ostendorff et al., 2024) based on
the perplexity from a language model (the ‘harmful_pp’ field) provided by the Ungoliant pipeline (Abadji et al., 2021). A minimal sketch of this two-step filter is shown below.
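The following sketch is purely illustrative: the field names (`quality_score`, `harmful_pp`), the harmful-perplexity threshold, and the assumption that lower `harmful_pp` values indicate text closer to the harmful-content language model are all hypothetical. Only the 0.8 quality threshold comes from the description above; the actual filtering is performed by the CURATE and Ungoliant/LLM-Datasets pipelines referenced below.
```python
QUALITY_THRESHOLD = 0.8          # threshold stated above
HARMFUL_PP_THRESHOLD = 1000.0    # assumed value, purely for illustration

def keep_document(doc: dict) -> bool:
    """Return True if a document passes both filters described above."""
    # Quality filter: drop documents whose CURATE-style score is below 0.8.
    if doc.get("quality_score", 0.0) < QUALITY_THRESHOLD:
        return False
    # Harmful-content filter: 'harmful_pp' is assumed to be the perplexity under
    # an adult-content language model, with lower values meaning the text looks
    # more like harmful content.
    harmful_pp = doc.get("harmful_pp")
    if harmful_pp is not None and harmful_pp < HARMFUL_PP_THRESHOLD:
        return False
    return True

# Toy documents with hypothetical field values.
corpus = [
    {"text": "A well-formed article.", "quality_score": 0.93, "harmful_pp": 5400.0},
    {"text": "!!! $$$ ###", "quality_score": 0.31, "harmful_pp": 8000.0},
]
filtered = [doc for doc in corpus if keep_document(doc)]
print(len(filtered))  # -> 1
```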
**Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data? If so, please provide a link or other access point to the “raw” data.**
The original raw data was not kept.
**Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point.**
Yes, the preprocessing and filtering software is open-sourced. The [CURATE](https://github.com/langtech-bsc/CURATE) pipeline was used for Spanish Crawling and CATalog,
and the [Ungoliant](https://github.com/oscar-project/ungoliant) pipeline was used for the OSCAR project.
#### Uses
**Has the dataset been used for any tasks already? If so, please provide a description.**
Pre-train the Salamandra model family.
**What (other) tasks could the dataset be used for?**
The data can be used primarily to pre-train other language models, which can then be used for a wide range of use cases. The dataset could
also be used for other tasks such as fine-tuning language models, cross-lingual NLP tasks, machine translation, domain-specific text
generation, and language-specific data analysis.
**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Is there anything a dataset consumer could do to mitigate these risks or harms?**
Web-crawled content over-represents standard language varieties, which impacts language model performance for minority languages.
Language diversity in the data is crucial to avoid bias, especially in encoding non-standard dialects, so that demographic
groups are not excluded. Moreover, despite legal uncertainties around web-scraped data, we prioritize permissive licenses and privacy protection measures,
acknowledging the challenges posed by personally identifiable information (PII) within large-scale datasets. Our ongoing efforts aim to
address privacy concerns and contribute to a more inclusive linguistic dataset.
**Are there tasks for which the dataset should not be used?**
-
#### Distribution
**Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset was created? If so, please provide a description.**
The dataset will not be released or distributed to third parties. Any question related to distribution is therefore omitted in this section.
#### Maintenance
**Who will be supporting/hosting/maintaining the dataset?**
The dataset will be hosted by the Language Technologies unit (LangTech) of the Barcelona Supercomputing Center (BSC). The team will ensure
regular updates and monitor the dataset for any issues related to content integrity, legal compliance, and bias for the sources they are
responsible for.
**How can the owner/curator/manager of the dataset be contacted?**
The data owner may be contacted at [email protected].
**Will the dataset be updated?**
The dataset will not be updated.
**If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? If so, please describe these limits and explain how they will be enforced.**
The dataset does not keep sensitive data that could allow direct identification of individuals, apart from the data that is publicly
available in web-sourced content. Due to the sheer volume and diversity of web data, it is not feasible to notify individuals or manage data
retention on an individual basis. However, efforts are made to mitigate the risks associated with sensitive information through
pre-processing and filtering to remove identifiable or harmful content. Despite these measures, vigilance is maintained to address potential
privacy and ethical issues.
**Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers.**
Since the dataset will not be updated, only the final version will be kept.
**If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?**
The dataset does not allow for external contributions.
</details>
---
## Evaluation
Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench). We also use English tasks already available on the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. In the tables below, we include the results on a selection of evaluation datasets that represent the model's performance across a variety of tasks within these benchmarks.
We only use tasks that are either human-generated, human-translated, or built with a strong human-in-the-loop (i.e., machine translation followed by professional revision, or machine generation followed by human revision and annotation). This is the reason behind the variety in the number of tasks reported across languages. As more tasks that fulfill these requirements are published, we will update the presented results. We also intend to expand the evaluation to other languages, as long as the datasets meet our quality standards.
During the implementation of the evaluation we observed a series of issues worth considering when replicating and interpreting the results presented. These issues include variances of ≈1.5% in performance on some tasks depending on the version of the `transformers` library used and on whether tensor parallelism is used when loading a model. When implementing existing tasks, we carry out a comprehensive quality evaluation of the dataset, the Harness task itself, and the kind of input models see during evaluation. Our implementation (see links above) addresses multiple existing problems, such as errors in datasets and prompts, and a lack of pre-processing. All this means that results will vary if other Harness implementations are used, and may vary slightly depending on the replication setup.
It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.
A full list of results compared to other baselines, a discussion of the model's performance across tasks and its implications, and details regarding problem-solving with task implementation will soon be available in the technical report.
All results reported below are on a 5-shot setting.
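As a reference point only, a run along these lines could be reproduced with the lm-evaluation-harness Python API (version 0.4 or later); the model identifier, dtype, and task choice below are illustrative, and the official numbers come from the task implementations linked above rather than from this exact call.
```python
import lm_eval

# Sketch: evaluate a Hugging Face model on one Spanish task in a 5-shot setting.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BSC-LT/salamandra-7b,dtype=bfloat16",
    tasks=["xstorycloze_es"],
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["xstorycloze_es"])
```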
#### Spanish
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td>Commonsense Reasoning</td>
<td>xstorycloze_es</td>
<td>acc</td>
<td>64.92</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_es</td>
<td>acc</td>
<td>54.93</td>
</tr>
<tr>
<td>xnli_es</td>
<td>acc</td>
<td>44.98</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws_es</td>
<td>acc</td>
<td>52.05</td>
</tr>
<tr>
<td>QA</td>
<td>xquad_es</td>
<td>acc</td>
<td>54.32</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_es</td>
<td>bleu</td>
<td>11.46</td>
</tr>
</tbody>
</table>
#### Catalan
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa_ca</td>
<td>acc</td>
<td>68.80</td>
</tr>
<tr>
<td>xstorycloze_ca</td>
<td>acc</td>
<td>65.72</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_ca</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_ca</td>
<td>acc</td>
<td>48.07</td>
</tr>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafraseja</td>
<td>acc</td>
<td>58.55</td>
</tr>
<tr>
<td>paws_ca</td>
<td>acc</td>
<td>55.15</td>
</tr>
<tr>
<td rowspan="5">QA</td>
<td>arc_ca_easy</td>
<td>acc</td>
<td>54.76</td>
</tr>
<tr>
<td>arc_ca_challenge</td>
<td>acc</td>
<td>30.55</td>
</tr>
<tr>
<td>openbookqa_ca</td>
<td>acc</td>
<td>27.40</td>
</tr>
<tr>
<td>piqa_ca</td>
<td>acc</td>
<td>62.89</td>
</tr>
<tr>
<td>siqa_ca</td>
<td>acc</td>
<td>41.91</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_ca</td>
<td>bleu</td>
<td>14.70</td>
</tr>
</tbody></table>
#### Basque
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>xcopa_eu</td>
<td>acc</td>
<td>55.60</td>
</tr>
<tr>
<td>xstorycloze_eu</td>
<td>acc</td>
<td>57.64</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_eu</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_eu</td>
<td>acc</td>
<td>39.78</td>
</tr>
<tr>
<td rowspan="3">QA</td>
<td>eus_exams</td>
<td>acc</td>
<td>23.72</td>
</tr>
<tr>
<td>eus_proficiency</td>
<td>acc</td>
<td>23.37</td>
</tr>
<tr>
<td>eus_trivia</td>
<td>acc</td>
<td>27.58</td>
</tr>
<tr>
<td>Reading Comprehension</td>
<td>eus_reading</td>
<td>acc</td>
<td>27.84</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_eu</td>
<td>bleu</td>
<td>3.58</td>
</tr>
</tbody></table>
#### Galician
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafrases_gl</td>
<td>acc</td>
<td>54.08</td>
</tr>
<tr>
<td>paws_gl</td>
<td>acc</td>
<td>53.30</td>
</tr>
<tr>
<td>QA</td>
<td>openbookqa_gl</td>
<td>acc</td>
<td>30.80</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_gl</td>
<td>bleu</td>
<td>12.86</td>
</tr>
</tbody>
</table>
#### English
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa</td>
<td>acc</td>
<td>83.00</td>
</tr>
<tr>
<td>xstorycloze_en</td>
<td>acc</td>
<td>73.06</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_en</td>
<td>acc</td>
<td>47.35</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws *</td>
<td>acc</td>
<td>55.95</td>
</tr>
<tr>
<td rowspan="6">QA</td>
<td>arc_easy</td>
<td>acc</td>
<td>74.07</td>
</tr>
<tr>
<td>arc_challenge</td>
<td>acc</td>
<td>37.63</td>
</tr>
<tr>
<td>openbookqa</td>
<td>acc</td>
<td>28.00</td>
</tr>
<tr>
<td>piqa</td>
<td>acc</td>
<td>74.86</td>
</tr>
<tr>
<td>social_iqa</td>
<td>acc</td>
<td>46.62</td>
</tr>
<tr>
<td>squad_en **</td>
<td>acc</td>
<td>44.38</td>
</tr>
</tbody></table>
\* The current LM Evaluation Harness implementation lacks correct pre-processing. These results are obtained with adequate pre-processing.
\*\* This task is not yet available in the official Harness; we hope to add it soon.
---
## Ethical Considerations and Limitations
We examine the presence of undesired societal and cognitive biases in this model using different benchmarks. For societal biases, we test performance using the BBQ dataset (Parrish et al., 2022) in the original English and the Regard dataset (Sheng et al., 2019). We report inadequate accuracies in both ambiguous and disambiguated contexts, which is indicative of societal biases that need to be addressed in post-training phases.
Our cognitive bias analysis focuses on positional effects in 0-shot settings and on majority class bias in few-shot settings. For positional effects, we leverage the ARC Multiple Choice Question dataset (Clark et al., 2018). We observe moderate to very strong primacy effects, whereby the model shows a preference for answers towards the beginning of the list of provided answers. We measure majority class effects in few-shot settings using SST-2 (Socher et al., 2013). We detect moderate effects, implying that outputs can be influenced by the prompts.
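To illustrate what a positional-effect probe can look like, the sketch below rotates the answer options of a single multiple-choice question and compares the probability mass the model places on each answer letter. This is not the exact protocol used for the ARC-based analysis; the model name, question, and prompt format are assumptions made for the example.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "BSC-LT/salamandra-2b"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def letter_probs(question, options):
    """Probability the model assigns to each answer letter, given the listed order."""
    letters = [chr(ord("A") + i) for i in range(len(options))]
    prompt = question + "\n" + "\n".join(
        f"{letter}. {opt}" for letter, opt in zip(letters, options)
    ) + "\nAnswer:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    letter_ids = [tokenizer(" " + letter, add_special_tokens=False).input_ids[-1] for letter in letters]
    return {letter: probs[tid].item() for letter, tid in zip(letters, letter_ids)}

# A position-insensitive model should track the option *content* as the list is
# rotated; a primacy-biased model keeps favouring the letter "A" regardless.
question = "Which gas do plants absorb from the atmosphere?"
options = ["Carbon dioxide", "Oxygen", "Nitrogen", "Helium"]
for shift in range(len(options)):
    rotated = options[shift:] + options[:shift]
    print(shift, letter_probs(question, rotated))
```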
Our analyses of these biases are by no means exhaustive and are limited by the relative scarcity of adequate resources in all languages present in the training data. We aim to gradually extend and expand our analyses in future work.
We highlight that these results can be expected from a pretrained model that has not yet been instruction-tuned or aligned. These tests are performed in order to show the biases the model may contain. We urge developers to take them into account and perform safety testing and tuning tailored to their specific applications of the model.
---
## Additional information
### Author
The Language Technologies Unit from Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <[email protected]>.
### Copyright
Copyright(c) 2024 by Language Technologies Unit, Barcelona Supercomputing Center.
### Funding
This work has been promoted and financed by the Government of Catalonia through the [Aina Project](https://projecteaina.cat/).
This work is funded by the _Ministerio para la Transformación Digital y de la Función Pública_ - Funded by EU – NextGenerationEU
within the framework of [ILENIA Project](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
### Acknowledgements
This project has benefited from the contributions of numerous teams and institutions, mainly through data contributions, knowledge transfer or technical support.
In Catalonia, many institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà.
At the national level, we are especially grateful to our ILENIA project partners CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, Fundación Elcano and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria.
At the international level, we thank the Welsh government, DFKI, the Occiglot project (especially Malte Ostendorff) and the Common Crawl Foundation (especially Pedro Ortiz) for their collaboration. We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially to Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipes Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.
Their valuable efforts have been instrumental in the development of this work.
### Disclaimer
Be aware that the model may contain biases or other unintended distortions.
When third parties deploy systems or provide services based on this model, or use the model themselves,
they bear the responsibility for mitigating any associated risks and ensuring compliance with applicable regulations,
including those governing the use of Artificial Intelligence.
The Barcelona Supercomputing Center, as the owner and creator of the model, shall not be held liable for any outcomes resulting from third-party use.
### Citation
Technical report and paper coming soon.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Model Index
|Model|Base|Instruct|
|:---:|:---:|:---:|
|2B| [Link](https://huggingface.co/BSC-LT/salamandra-2b) | [Link](https://huggingface.co/BSC-LT/salamandra-2b-instruct) |
|7B| [Link](https://huggingface.co/BSC-LT/salamandra-7b) | [Link](https://huggingface.co/BSC-LT/salamandra-7b-instruct) |
|40B| WiP | WiP |
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | [
"BEAR",
"SCIELO"
] | Non_BioNLP |
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-478897 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Science",
"Verification",
"Dataset",
"Research",
"AI",
"en",
"dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-478897",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716 | 1,716 | 8 | 0 | ---
datasets:
- fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-478897
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Science
- Verification
- Dataset
- Research
- AI
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
scientific claim verification
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-478897',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
| [
"TEXT_CLASSIFICATION"
] | [
"SCIFACT"
] | Non_BioNLP |
RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,720 | 1,721 | 51 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
UNAversal-8x7B-v1beta - GGUF
- Model creator: https://huggingface.co/fblgit/
- Original model: https://huggingface.co/fblgit/UNAversal-8x7B-v1beta/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UNAversal-8x7B-v1beta.Q2_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q2_K.gguf) | Q2_K | 16.12GB |
| [UNAversal-8x7B-v1beta.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [UNAversal-8x7B-v1beta.IQ3_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [UNAversal-8x7B-v1beta.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [UNAversal-8x7B-v1beta.IQ3_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [UNAversal-8x7B-v1beta.Q3_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K.gguf) | Q3_K | 21.0GB |
| [UNAversal-8x7B-v1beta.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [UNAversal-8x7B-v1beta.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [UNAversal-8x7B-v1beta.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [UNAversal-8x7B-v1beta.Q4_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_0.gguf) | Q4_0 | 24.63GB |
| [UNAversal-8x7B-v1beta.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [UNAversal-8x7B-v1beta.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [UNAversal-8x7B-v1beta.Q4_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_K.gguf) | Q4_K | 26.49GB |
| [UNAversal-8x7B-v1beta.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [UNAversal-8x7B-v1beta.Q4_1.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_1.gguf) | Q4_1 | 27.32GB |
| [UNAversal-8x7B-v1beta.Q5_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_0.gguf) | Q5_0 | 30.02GB |
| [UNAversal-8x7B-v1beta.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [UNAversal-8x7B-v1beta.Q5_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_K.gguf) | Q5_K | 30.95GB |
| [UNAversal-8x7B-v1beta.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [UNAversal-8x7B-v1beta.Q5_1.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_1.gguf) | Q5_1 | 32.71GB |
| [UNAversal-8x7B-v1beta.Q6_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q6_K.gguf) | Q6_K | 35.74GB |
| [UNAversal-8x7B-v1beta.Q8_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/tree/main/) | Q8_0 | 46.22GB |
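The table above only lists the quantized files. As a hedged illustration (not part of the original quantization release notes), one way to download and run one of these GGUF files locally is with `huggingface_hub` and `llama-cpp-python`; the chosen quant, context size, and prompt below are assumptions.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Illustrative choice: the Q4_K_M file from the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf",
    filename="UNAversal-8x7B-v1beta.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an assumption
output = llm("Question: What is Uniform Neural Alignment?\nAnswer:", max_tokens=128)
print(output["choices"][0]["text"])
```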
Original model description:
---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
tags:
- UNA
- juanako
- mixtral
- MoE
model-index:
- name: UNAversal-8x7B-v1beta
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.97
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
---
# UNAversal - Uniform Neural Alignment (MoE)
This is just a beta, a first release, so people can start working on frankensteins and so on.
It does achieve high GSM/Math and TQA, so ideally you can merge it with other mixtrals and see what comes out of it.
Based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## UNA Details
For this model we went with the most obvious approach: placing UNA on the router_logit. It does work, but we saw much better performance on SFT by doing so.
So this model DOES have a UNA-SFT phase; it's highly experimental and merely used LLaMA-Factory datasets, for example alpaca.
As the others:
- Can be finetuned further, try 2e-5 or **1e-4 (since it's MoE)**
- Can be merged; here you will have to improvise, and please report findings in a discussion thread.
**REMINDER**: please.. cite, it does help on the research and the lab itself, seriously.
## NEED YOUR HELP!!
I need a multi-turn trainloop for the Mixtral, that can squeeze the juice out of 8xH100's properly. Please feel free to reach @fblgit either discord or twitter. thanks!
# Evals
Here are some, but we also submitted it to the HF eval queue...
## GSM8k 5-Shot
```
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6603|± | 0.013|
```
## ARC 25-Shot
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.6621|± |0.0138|
| | |none | 25|acc_norm|0.6962|± |0.0134|
```
## TruthfulQA 0-Shot (MC2)
```
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7122|± |0.0141|
```
## 0-Shots Evals
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|----------|-----:|---|-----:|
|arc_challenge |Yaml |none | 0|acc |0.6101|± |0.0143|
| | |none | 0|acc_norm |0.6425|± |0.0140|
|arc_easy |Yaml |none | 0|acc |0.8615|± |0.0071|
| | |none | 0|acc_norm |0.8375|± |0.0076|
|boolq |Yaml |none | 0|acc |0.8624|± |0.0060|
|lambada_openai|Yaml |none | 0|perplexity|2.8318|± |0.0507|
| | |none | 0|acc |0.7650|± |0.0059|
|mathqa |Yaml |none | 0|acc |0.4472|± |0.0091|
| | |none | 0|acc_norm |0.4436|± |0.0091|
|piqa |Yaml |none | 0|acc |0.8292|± |0.0088|
| | |none | 0|acc_norm |0.8422|± |0.0085|
|pubmedqa |Yaml |none | 0|acc |0.7920|± |0.0182|
|sciq |Yaml |none | 0|acc |0.9630|± |0.0060|
| | |none | 0|acc_norm |0.9370|± |0.0077|
```
## BBH at 0-Shot
```
vllm (pretrained=fblgit/UNAversal-8x7B-v1beta,tensor_parallel_size=2,data_parallel_size=4,gpu_memory_utilization=0.8,dtype=float16), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: auto
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
| - bbh_cot_fewshot_boolean_expressions |Yaml |get-answer| 0|exact_match|0.8840|± |0.0203|
| - bbh_cot_fewshot_causal_judgement |Yaml |get-answer| 0|exact_match|0.6417|± |0.0352|
| - bbh_cot_fewshot_date_understanding |Yaml |get-answer| 0|exact_match|0.7600|± |0.0271|
| - bbh_cot_fewshot_disambiguation_qa |Yaml |get-answer| 0|exact_match|0.7160|± |0.0286|
| - bbh_cot_fewshot_dyck_languages |Yaml |get-answer| 0|exact_match|0.1800|± |0.0243|
| - bbh_cot_fewshot_formal_fallacies |Yaml |get-answer| 0|exact_match|0.6520|± |0.0302|
| - bbh_cot_fewshot_geometric_shapes |Yaml |get-answer| 0|exact_match|0.3880|± |0.0309|
| - bbh_cot_fewshot_hyperbaton |Yaml |get-answer| 0|exact_match|0.9600|± |0.0124|
| - bbh_cot_fewshot_logical_deduction_five_objects |Yaml |get-answer| 0|exact_match|0.5360|± |0.0316|
| - bbh_cot_fewshot_logical_deduction_seven_objects |Yaml |get-answer| 0|exact_match|0.5040|± |0.0317|
| - bbh_cot_fewshot_logical_deduction_three_objects |Yaml |get-answer| 0|exact_match|0.8600|± |0.0220|
| - bbh_cot_fewshot_movie_recommendation |Yaml |get-answer| 0|exact_match|0.7840|± |0.0261|
| - bbh_cot_fewshot_multistep_arithmetic_two |Yaml |get-answer| 0|exact_match|0.6600|± |0.0300|
| - bbh_cot_fewshot_navigate |Yaml |get-answer| 0|exact_match|0.8160|± |0.0246|
| - bbh_cot_fewshot_object_counting |Yaml |get-answer| 0|exact_match|0.8360|± |0.0235|
| - bbh_cot_fewshot_penguins_in_a_table |Yaml |get-answer| 0|exact_match|0.7329|± |0.0367|
| - bbh_cot_fewshot_reasoning_about_colored_objects |Yaml |get-answer| 0|exact_match|0.8120|± |0.0248|
| - bbh_cot_fewshot_ruin_names |Yaml |get-answer| 0|exact_match|0.4440|± |0.0315|
| - bbh_cot_fewshot_salient_translation_error_detection |Yaml |get-answer| 0|exact_match|0.5200|± |0.0317|
| - bbh_cot_fewshot_snarks |Yaml |get-answer| 0|exact_match|0.7135|± |0.0340|
| - bbh_cot_fewshot_sports_understanding |Yaml |get-answer| 0|exact_match|0.9400|± |0.0151|
| - bbh_cot_fewshot_temporal_sequences |Yaml |get-answer| 0|exact_match|0.7560|± |0.0272|
| - bbh_cot_fewshot_tracking_shuffled_objects_five_objects |Yaml |get-answer| 0|exact_match|0.5680|± |0.0314|
| - bbh_cot_fewshot_tracking_shuffled_objects_seven_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_tracking_shuffled_objects_three_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_web_of_lies |Yaml |get-answer| 0|exact_match|0.9560|± |0.0130|
| - bbh_cot_fewshot_word_sorting |Yaml |get-answer| 0|exact_match|0.3800|± |0.0308|
|Groups|Version| Filter |n-shot| Metric |Value | |Stderr|
|------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNAversal-8x7B-v1beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.78|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |86.90|
|MMLU (5-Shot) |70.39|
|TruthfulQA (0-shot) |71.97|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |61.64|
| [
"TRANSLATION"
] | [
"PUBMEDQA",
"SCIQ"
] | Non_BioNLP |
croissantllm/base_75k | croissantllm | text2text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705 | 1,706 | 8 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (75k steps)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 75k steps (1.18T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model; that is, it is not finetuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_75k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.
He is heading to the market. -> Il va au marché.
We are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| [
"TRANSLATION"
] | [
"CRAFT"
] | Non_BioNLP |
rjnClarke/bgem3-shakespeare_st_3 | rjnClarke | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10352",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,721 | 1,721 | 6 | 0 | ---
base_model: BAAI/bge-m3
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10352
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Othello requests for Desdemona to be brought to the Sagittary to
speak on his behalf in front of the Duke.
sentences:
- "Or with some dram conjured to this effect,\n He wrought upon her. DUKE. \
\ To vouch this is no proof, Without more certain and more\
\ overt test Than these thin habits and poor likelihoods Of modern seeming\
\ do prefer against him. FIRST SENATOR. But, Othello, speak. Did you by indirect\
\ and forced courses Subdue and poison this young maid's affections? Or\
\ came it by request, and such fair question As soul to soul affordeth? OTHELLO.\
\ I do beseech you, Send for the lady to the Sagittary,\
\ And let her speak of me before her father. If you do find me foul in her\
\ report, The trust, the office I do hold of you, Not only take away,\
\ but let your sentence Even fall upon my life. DUKE. \
\ Fetch Desdemona hither. OTHELLO. Ancient, conduct them; you best know the place.\
\ Exeunt Iago and Attendants. And\
\ till she come, as truly as to heaven I do confess the vices of my blood,\
\ So justly to your grave ears I'll present How I did thrive in this fair\
\ lady's love And she in mine. DUKE. Say it, Othello. OTHELLO. Her father\
\ loved me, oft invited me, Still question'd me the story of my life From\
\ year to year, the battles, sieges, fortunes, That I have pass'd. I ran\
\ it through, even from my boyish days To the very moment that he bade me tell\
\ it: Wherein I spake of most disastrous chances,\n Of moving accidents\
\ by flood and field,\n"
- "Enter Lodovico, Desdemona, and Attendants. LODOVICO. God save the worthy general!\n\
\ OTHELLO. With all my heart, sir. LODOVICO.\
\ The Duke and Senators of Venice greet you. \
\ Gives him a letter. OTHELLO. I kiss the instrument of their\
\ pleasures. Opens the letter, and reads.\
\ DESDEMONA. And what's the news, good cousin Lodovico? IAGO. I am very glad\
\ to see you, signior; Welcome to Cyprus. LODOVICO. I thank you. How does\
\ Lieutenant Cassio? IAGO. Lives, sir. DESDEMONA. Cousin, there's fall'n between\
\ him and my lord An unkind breech; but you shall make all well. OTHELLO.\
\ Are you sure of that? DESDEMONA. My lord? OTHELLO. [Reads.] \"This fail you\
\ not to do, as you will-\" LODOVICO. He did not call; he's busy in the paper.\
\ Is there division 'twixt my lord and Cassio? DESDEMONA. A most unhappy one.\
\ I would do much To atone them, for the love I bear to Cassio. OTHELLO. Fire\
\ and brimstone! DESDEMONA. My lord? OTHELLO. Are you wise? DESDEMONA. What,\
\ is he angry? LODOVICO. May be the letter moved him; \
\ For, as I think, they do command him home, Deputing Cassio in his government.\
\ DESDEMONA. By my troth, I am glad on't. OTHELLO. \
\ Indeed! DESDEMONA. My lord?\n \
\ OTHELLO. I am glad to see you mad.\n"
- "HORTENSIO. Would all the world but he had quite forsworn!\n For me, that I\
\ may surely keep mine oath, I will be married to a wealtlly widow Ere three\
\ days pass, which hath as long lov'd me As I have lov'd this proud disdainful\
\ haggard. And so farewell, Signior Lucentio. Kindness in women, not their\
\ beauteous looks, Shall win my love; and so I take my leave, In resolution\
\ as I swore before. Exit TRANIO. Mistress Bianca, bless\
\ you with such grace As 'longeth to a lover's blessed case! Nay, I have\
\ ta'en you napping, gentle love, And have forsworn you with Hortensio. BIANCA.\
\ Tranio, you jest; but have you both forsworn me? TRANIO. Mistress, we have.\
\ LUCENTIO. Then we are rid of Licio. TRANIO. I' faith, he'll have a lusty widow\
\ now, That shall be woo'd and wedded in a day. BIANCA. God give him joy!\
\ TRANIO. Ay, and he'll tame her. BIANCA. He says so, Tranio. TRANIO. Faith,\
\ he is gone unto the taming-school. BIANCA. The taming-school! What, is there\
\ such a place? TRANIO. Ay, mistress; and Petruchio is the master, That teacheth\
\ tricks eleven and twenty long, To tame a shrew and charm her chattering tongue.\n\
\ Enter BIONDELLO BIONDELLO. O master, master, have watch'd\
\ so long\n That I am dog-weary; but at last I spied An ancient angel coming\
\ down the hill Will serve the turn. TRANIO. What is he, Biondello? BIONDELLO.\
\ Master, a mercatante or a pedant,\n I know not what; but formal in apparel,\n\
\ In gait and countenance surely like a father. LUCENTIO. And what of him,\
\ Tranio? TRANIO. If he be credulous and trust my tale, I'll make him glad\
\ to seem Vincentio, And give assurance to Baptista Minola As if he were\
\ the right Vincentio. Take in your love, and then let me alone. \
\ Exeunt LUCENTIO and BIANCA\n \
\ Enter a PEDANT PEDANT. God save you, sir!\n"
- source_sentence: In "King Lear," characters are embroiled in a war, with shifting
loyalties and power struggles, leading to complex relationships and betrayals.
sentences:
- "III.\nA churchyard.\nEnter Claudio, Don Pedro, and three or four with tapers,\n\
[followed by Musicians].\n Claud. Is this the monument of Leonato?\n Lord. It\
\ is, my lord.\n Claud. [reads from a scroll]\n Epitaph.\
\ Done to death by slanderous tongues\n Was the Hero that here\
\ lies. Death, in guerdon of her wrongs, Gives her fame which\
\ never dies. So the life that died with shame Lives in death with\
\ glorious fame.\n Hang thou there upon the tomb,\n \
\ [Hangs up the scroll.] Praising her when I am dumb. \
\ Now, music, sound, and sing your solemn hymn.\n Song.\
\ Pardon, goddess of the night,\n Those that slew thy virgin knight;\
\ For the which, with songs of woe, Round about her tomb they go.\
\ Midnight, assist our moan, Help us to sigh and groan \
\ Heavily, heavily, Graves, yawn and yield your dead, Till death\
\ be uttered Heavily, heavily.\n Claud. Now unto thy bones good night!\n\
\ Yearly will I do this rite. Pedro. Good morrow, masters. Put your torches\
\ out. The wolves have prey'd, and look, the gentle day, Before the wheels\
\ of Phoebus, round about Dapples the drowsy east with spots of grey.\n \
\ Thanks to you all, and leave us. Fare you well.\n Claud. Good morrow, masters.\
\ Each his several way. Pedro. Come, let us hence and put on other weeds, \
\ And then to Leonato's we will go. Claud. And Hymen now with luckier issue speeds\
\ Than this for whom we rend'red up this woe. Exeunt.\n"
- "And fortune led you well. You have the captives Who were the opposites of\
\ this day's strife. We do require them of you, so to use them As we shall\
\ find their merits and our safety May equally determine. Edm. Sir, I thought\
\ it fit To send the old and miserable King To some retention and appointed\
\ guard; Whose age has charms in it, whose title more, To pluck the\
\ common bosom on his side And turn our impress'd lances in our eyes Which\
\ do command them. With him I sent the Queen, My reason all the same; and\
\ they are ready To-morrow, or at further space, t' appear Where you shall\
\ hold your session. At this time We sweat and bleed: the friend hath lost\
\ his friend; And the best quarrels, in the heat, are curs'd By those\
\ that feel their sharpness. The question of Cordelia and her father Requires\
\ a fitter place. Alb. Sir, by your patience, I hold you but a subject of\
\ this war, Not as a brother. Reg. That's as we list to grace him. Methinks\
\ our pleasure might have been demanded Ere you had spoke so far. He led our\
\ powers, Bore the commission of my place and person, The which immediacy\
\ may well stand up\n And call itself your brother.\n"
- "Her infinite cunning with her modern grace\n Subdu'd me to her rate. She got\
\ the ring; And I had that which any inferior might At market-price have\
\ bought. DIANA. I must be patient. You that have turn'd off a first so noble\
\ wife May justly diet me. I pray you yet- Since you lack virtue, I will\
\ lose a husband- Send for your ring, I will return it home, And give me\
\ mine again. BERTRAM. I have it not. KING. What ring was yours, I pray you?\
\ DIANA. Sir, much like The same upon your finger. KING. Know you this ring?\
\ This ring was his of late. DIANA. And this was it I gave him, being abed. \
\ KING. The story, then, goes false you threw it him Out of a casement. DIANA.\
\ I have spoke the truth.\n Enter PAROLLES\n BERTRAM.\
\ My lord, I do confess the ring was hers. KING. You boggle shrewdly; every feather\
\ starts you. Is this the man you speak of? DIANA. Ay, my lord. KING. Tell\
\ me, sirrah-but tell me true I charge you, Not fearing the displeasure of\
\ your master, Which, on your just proceeding, I'll keep off- By him and\
\ by this woman here what know you? PAROLLES. So please your Majesty, my master\
\ hath been an honourable gentleman; tricks he hath had in him, which gentlemen\
\ have. KING. Come, come, to th' purpose. Did he love this woman? PAROLLES.\
\ Faith, sir, he did love her; but how? KING. How, I pray you?\n PAROLLES.\
\ He did love her, sir, as a gentleman loves a woman.\n"
- source_sentence: King Edward expresses a desire for peace and reconciliation with
all those present, including Queen Elizabeth, Lord Rivers, and Dorset.
sentences:
- "But what is not.\n BANQUO. Look, how our partner's rapt. MACBETH. [Aside.]\
\ If chance will have me King, why, chance may crown me Without my stir.\
\ BANQUO. New honors come upon him, Like our strange garments, cleave not\
\ to their mould But with the aid of use. MACBETH. [Aside.] Come what come\
\ may, Time and the hour runs through the roughest day. BANQUO. Worthy Macbeth,\
\ we stay upon your leisure. MACBETH. Give me your favor; my dull brain was wrought\
\ With things forgotten. Kind gentlemen, your pains Are register'd where\
\ every day I turn The leaf to read them. Let us toward the King. Think\
\ upon what hath chanced, and at more time, The interim having weigh'd it,\
\ let us speak Our free hearts each to other. BANQUO. Very gladly. MACBETH.\
\ Till then, enough. Come, friends. Exeunt.\n"
- "Have aught committed that is hardly borne\n To any in this presence, I desire\
\ To reconcile me to his friendly peace: 'Tis death to me to be at enmity;\
\ I hate it, and desire all good men's love. First, madam, I entreat true\
\ peace of you, Which I will purchase with my duteous service; Of you, my\
\ noble cousin Buckingham, If ever any grudge were lodg'd between us; \
\ Of you, and you, Lord Rivers, and of Dorset, That all without desert have\
\ frown'd on me; Of you, Lord Woodville, and, Lord Scales, of you; Dukes,\
\ earls, lords, gentlemen-indeed, of all. I do not know that Englishman alive\
\ With whom my soul is any jot at odds More than the infant that is born\
\ to-night. I thank my God for my humility. QUEEN ELIZABETH. A holy day shall\
\ this be kept hereafter. I would to God all strifes were well compounded.\
\ My sovereign lord, I do beseech your Highness To take our brother Clarence\
\ to your grace. GLOUCESTER. Why, madam, have I off'red love for this, To\
\ be so flouted in this royal presence? Who knows not that the gentle Duke\
\ is dead? [They all start] \
\ You do him injury to scorn his corse. KING EDWARD. Who knows not he is dead!\
\ Who knows he is? QUEEN ELIZABETH. All-seeing heaven, what a world is this!\
\ BUCKINGHAM. Look I so pale, Lord Dorset, as the rest? DORSET. Ay, my good\
\ lord; and no man in the presence But his red colour hath forsook his cheeks.\
\ KING EDWARD. Is Clarence dead? The order was revers'd. GLOUCESTER. But he,\
\ poor man, by your first order died, And that a winged Mercury did bear;\n\
\ Some tardy cripple bare the countermand\n That came too lag to see him\
\ buried. God grant that some, less noble and less loyal, Nearer in bloody\
\ thoughts, an not in blood, Deserve not worse than wretched Clarence did,\
\ And yet go current from suspicion!\n"
- "OTHELLO. Amen to that, sweet powers!\n I cannot speak enough\
\ of this content; It stops me here; it is too much of joy. And this, and\
\ this, the greatest discords be Kisses her. That e'er our hearts\
\ shall make! IAGO. [Aside.] O, you are well tuned now! \
\ But I'll set down the pegs that make this music, As honest as I am. OTHELLO.\
\ Come, let us to the castle. News, friends: our wars are done,\
\ the Turks are drown'd. How does my old acquaintance of this isle? Honey,\
\ you shall be well desired in Cyprus; I have found great love amongst them.\
\ O my sweet, I prattle out of fashion, and I dote In mine own comforts.\
\ I prithee, good Iago, Go to the bay and disembark my coffers. Bring thou\
\ the master to the citadel; He is a good one, and his worthiness Does challenge\
\ much respect. Come, Desdemona, Once more well met at Cyprus. \
\ Exeunt all but Iago and Roderigo. IAGO. Do thou meet\
\ me presently at the harbor. Come hither. If thou be'st valiant- as they say\
\ base men being in love have then a nobility in their natures more than is\
\ native to them- list me. The lieutenant tonight watches on the court of guard.\
\ First, I must tell thee this: Desdemona is directly in love with him. RODERIGO.\
\ With him? Why, 'tis not possible. IAGO. Lay thy finger thus, and let thy soul\
\ be instructed. Mark me with what violence she first loved the Moor, but for\
\ bragging and telling her fantastical lies. And will she love him still\
\ for prating? Let not thy discreet heart think it. Her eye must be fed;\
\ and what delight shall she have to look on the devil? When the blood is made\
\ dull with the act of sport, there should be, again to inflame it and to give\
\ satiety a fresh appetite,\n loveliness in favor, sympathy in years, manners,\
\ and beauties-\n"
- source_sentence: How does the character in the excerpt feel about being dishonored
openly?
sentences:
- "Where the dead corpse of Bassianus lay;\n I wrote the letter that thy father\
\ found, And hid the gold within that letter mention'd, Confederate with\
\ the Queen and her two sons; And what not done, that thou hast cause to rue,\
\ Wherein I had no stroke of mischief in it? I play'd the cheater for thy\
\ father's hand, And, when I had it, drew myself apart And almost broke\
\ my heart with extreme laughter. I pried me through the crevice of a wall,\
\ When, for his hand, he had his two sons' heads; Beheld his tears, and\
\ laugh'd so heartily That both mine eyes were rainy like to his; And when\
\ I told the Empress of this sport, She swooned almost at my pleasing tale,\
\ And for my tidings gave me twenty kisses. GOTH. What, canst thou say all\
\ this and never blush? AARON. Ay, like a black dog, as the saying is. LUCIUS.\
\ Art thou not sorry for these heinous deeds? AARON. Ay, that I had not done\
\ a thousand more. Even now I curse the day- and yet, I think, Few come\
\ within the compass of my curse- Wherein I did not some notorious ill; \
\ As kill a man, or else devise his death; Ravish a maid, or plot the way to\
\ do it; Accuse some innocent, and forswear myself; Set deadly enmity between\
\ two friends; Make poor men's cattle break their necks; Set fire on barns\
\ and hay-stacks in the night, And bid the owners quench them with their tears.\
\ Oft have I digg'd up dead men from their graves, And set them upright\
\ at their dear friends' door Even when their sorrows almost was forgot, \
\ And on their skins, as on the bark of trees,\n Have with my knife carved\
\ in Roman letters\n"
- "MESSALA. Lo, yonder, and Titinius mourning it.\n BRUTUS. Titinius' face is\
\ upward.\n CATO. He is slain. BRUTUS. O Julius Caesar, thou art mighty yet!\
\ Thy spirit walks abroad, and turns our swords In our own proper entrails.\
\ Low alarums. CATO. Brave Titinius! Look whe'er he have\
\ not crown'd dead Cassius! BRUTUS. Are yet two Romans living such as these?\
\ The last of all the Romans, fare thee well! It is impossible that ever\
\ Rome Should breed thy fellow. Friends, I owe moe tears To this dead man\
\ than you shall see me pay. I shall find time, Cassius, I shall find time.\
\ Come therefore, and to Thasos send his body; His funerals shall not be\
\ in our camp, Lest it discomfort us. Lucilius, come, And come, young Cato;\
\ let us to the field. Labio and Flavio, set our battles on. 'Tis three\
\ o'clock, and Romans, yet ere night We shall try fortune in a second fight.\
\ Exeunt.\n"
- "TITUS. Prince Bassianus, leave to plead my deeds.\n 'Tis thou and those that\
\ have dishonoured me. Rome and the righteous heavens be my judge How I\
\ have lov'd and honoured Saturnine! TAMORA. My worthy lord, if ever Tamora \
\ Were gracious in those princely eyes of thine, Then hear me speak indifferently\
\ for all; And at my suit, sweet, pardon what is past. SATURNINUS. What, madam!\
\ be dishonoured openly, And basely put it up without revenge? TAMORA. Not\
\ so, my lord; the gods of Rome forfend I should be author to dishonour you!\
\ But on mine honour dare I undertake For good Lord Titus' innocence in\
\ all, Whose fury not dissembled speaks his griefs. Then at my suit look\
\ graciously on him; Lose not so noble a friend on vain suppose, Nor with\
\ sour looks afflict his gentle heart. [Aside to SATURNINUS] My lord, be rul'd\
\ by me, be won at last; Dissemble all your griefs and discontents. \
\ You are but newly planted in your throne; Lest, then, the people, and patricians\
\ too, Upon a just survey take Titus' part, And so supplant you for ingratitude,\
\ Which Rome reputes to be a heinous sin, Yield at entreats, and then\
\ let me alone: I'll find a day to massacre them all, And raze their faction\
\ and their family, The cruel father and his traitorous sons, To whom I\
\ sued for my dear son's life; And make them know what 'tis to let a queen\
\ Kneel in the streets and beg for grace in vain.- Come, come, sweet Emperor;\
\ come, Andronicus. Take up this good old man, and cheer the heart\n That\
\ dies in tempest of thy angry frown.\n"
- source_sentence: King Henry V is preparing for an expedition to France to seek revenge
on the Dauphin for mocking him, and he urges his lords to quickly gather resources
and support for the impending war.
sentences:
- "LEWIS. The sun of heaven, methought, was loath to set,\n But stay'd and made\
\ the western welkin blush, When English measure backward their own ground\
\ In faint retire. O, bravely came we off, When with a volley of our needless\
\ shot, After such bloody toil, we bid good night; And wound our tott'ring\
\ colours clearly up, Last in the field and almost lords of it!\n \
\ Enter a MESSENGER MESSENGER. Where is my prince, the Dauphin?\n LEWIS.\
\ Here; what news? MESSENGER. The Count Melun is slain; the English lords \
\ By his persuasion are again fall'n off, And your supply, which you have wish'd\
\ so long, Are cast away and sunk on Goodwin Sands. LEWIS. Ah, foul shrewd\
\ news! Beshrew thy very heart!\n I did not think to be so sad to-night\n\
\ As this hath made me. Who was he that said King John did fly an hour or\
\ two before The stumbling night did part our weary pow'rs? MESSENGER. Whoever\
\ spoke it, it is true, my lord. LEWIS. keep good quarter and good care to-night;\
\ The day shall not be up so soon as I To try the fair adventure of to-morrow.\
\ Exeunt\n"
- "And that great minds, of partial indulgence\n To their benumbed wills, resist\
\ the same; There is a law in each well-order'd nation To curb those raging\
\ appetites that are Most disobedient and refractory. If Helen, then, be\
\ wife to Sparta's king- As it is known she is-these moral laws Of nature\
\ and of nations speak aloud To have her back return'd. Thus to persist \
\ In doing wrong extenuates not wrong, But makes it much more heavy. Hector's\
\ opinion Is this, in way of truth. Yet, ne'er the less, My spritely brethren,\
\ I propend to you In resolution to keep Helen still; For 'tis a cause that\
\ hath no mean dependence Upon our joint and several dignities. TROILUS. Why,\
\ there you touch'd the life of our design. Were it not glory that we more\
\ affected Than the performance of our heaving spleens, I would not wish\
\ a drop of Troyan blood Spent more in her defence. But, worthy Hector, \
\ She is a theme of honour and renown, A spur to valiant and magnanimous deeds,\
\ Whose present courage may beat down our foes, And fame in time to come\
\ canonize us; For I presume brave Hector would not lose So rich advantage\
\ of a promis'd glory As smiles upon the forehead of this action For the\
\ wide world's revenue. HECTOR. I am yours, You valiant offspring of great\
\ Priamus. I have a roisting challenge sent amongst The dull and factious\
\ nobles of the Greeks Will strike amazement to their drowsy spirits. I\
\ was advertis'd their great general slept,\n Whilst emulation in the army\
\ crept.\n This, I presume, will wake him. Exeunt\n"
- "That shall fly with them; for many a thousand widows\n Shall this his mock\
\ mock of their dear husbands; Mock mothers from their sons, mock castles down;\
\ And some are yet ungotten and unborn That shall have cause to curse the\
\ Dauphin's scorn. But this lies all within the will of God, To whom I do\
\ appeal; and in whose name, Tell you the Dauphin, I am coming on, To venge\
\ me as I may and to put forth My rightful hand in a well-hallow'd cause. \
\ So get you hence in peace; and tell the Dauphin His jest will savour but\
\ of shallow wit, When thousands weep more than did laugh at it. Convey\
\ them with safe conduct. Fare you well. \
\ Exeunt AMBASSADORS EXETER. This was a merry message. KING HENRY. We\
\ hope to make the sender blush at it. Therefore, my lords, omit no happy hour\
\ That may give furth'rance to our expedition; For we have now no thought\
\ in us but France, Save those to God, that run before our business. Therefore\
\ let our proportions for these wars Be soon collected, and all things thought\
\ upon That may with reasonable swiftness ad More feathers to our wings;\
\ for, God before, We'll chide this Dauphin at his father's door. Therefore\
\ let every man now task his thought That this fair action may on foot be brought.\
\ Exeunt\n"
model-index:
- name: SentenceTransformer based on BAAI/bge-m3
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.3822762814943527
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5234578627280626
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5825369244135534
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.656385751520417
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3822762814943527
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1744859542426875
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11650738488271069
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06563857515204169
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.3822762814943527
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5234578627280626
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5825369244135534
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.656385751520417
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5142429769745347
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.46940583067863767
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4766478781892753
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.3822762814943527
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.5234578627280626
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.5825369244135534
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.656385751520417
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.3822762814943527
name: Dot Precision@1
- type: dot_precision@3
value: 0.1744859542426875
name: Dot Precision@3
- type: dot_precision@5
value: 0.11650738488271069
name: Dot Precision@5
- type: dot_precision@10
value: 0.06563857515204169
name: Dot Precision@10
- type: dot_recall@1
value: 0.3822762814943527
name: Dot Recall@1
- type: dot_recall@3
value: 0.5234578627280626
name: Dot Recall@3
- type: dot_recall@5
value: 0.5825369244135534
name: Dot Recall@5
- type: dot_recall@10
value: 0.656385751520417
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5142429769745347
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.46940583067863767
name: Dot Mrr@10
- type: dot_map@100
value: 0.4766478781892753
name: Dot Map@100
---
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rjnClarke/bgem3-shakespeare_st_3")
# Run inference
sentences = [
'King Henry V is preparing for an expedition to France to seek revenge on the Dauphin for mocking him, and he urges his lords to quickly gather resources and support for the impending war.',
"That shall fly with them; for many a thousand widows\n Shall this his mock mock of their dear husbands; Mock mothers from their sons, mock castles down; And some are yet ungotten and unborn That shall have cause to curse the Dauphin's scorn. But this lies all within the will of God, To whom I do appeal; and in whose name, Tell you the Dauphin, I am coming on, To venge me as I may and to put forth My rightful hand in a well-hallow'd cause. So get you hence in peace; and tell the Dauphin His jest will savour but of shallow wit, When thousands weep more than did laugh at it. Convey them with safe conduct. Fare you well. Exeunt AMBASSADORS EXETER. This was a merry message. KING HENRY. We hope to make the sender blush at it. Therefore, my lords, omit no happy hour That may give furth'rance to our expedition; For we have now no thought in us but France, Save those to God, that run before our business. Therefore let our proportions for these wars Be soon collected, and all things thought upon That may with reasonable swiftness ad More feathers to our wings; for, God before, We'll chide this Dauphin at his father's door. Therefore let every man now task his thought That this fair action may on foot be brought. Exeunt\n",
"And that great minds, of partial indulgence\n To their benumbed wills, resist the same; There is a law in each well-order'd nation To curb those raging appetites that are Most disobedient and refractory. If Helen, then, be wife to Sparta's king- As it is known she is-these moral laws Of nature and of nations speak aloud To have her back return'd. Thus to persist In doing wrong extenuates not wrong, But makes it much more heavy. Hector's opinion Is this, in way of truth. Yet, ne'er the less, My spritely brethren, I propend to you In resolution to keep Helen still; For 'tis a cause that hath no mean dependence Upon our joint and several dignities. TROILUS. Why, there you touch'd the life of our design. Were it not glory that we more affected Than the performance of our heaving spleens, I would not wish a drop of Troyan blood Spent more in her defence. But, worthy Hector, She is a theme of honour and renown, A spur to valiant and magnanimous deeds, Whose present courage may beat down our foes, And fame in time to come canonize us; For I presume brave Hector would not lose So rich advantage of a promis'd glory As smiles upon the forehead of this action For the wide world's revenue. HECTOR. I am yours, You valiant offspring of great Priamus. I have a roisting challenge sent amongst The dull and factious nobles of the Greeks Will strike amazement to their drowsy spirits. I was advertis'd their great general slept,\n Whilst emulation in the army crept.\n This, I presume, will wake him. Exeunt\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
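Because the fine-tune targets matching short scene summaries to Shakespeare passages, a natural follow-up is semantic search over a passage corpus. The snippet below is a minimal sketch rather than part of the original pipeline: the `passages` list and the query string are placeholder examples, and it assumes the standard `sentence_transformers.util.semantic_search` helper.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("rjnClarke/bgem3-shakespeare_st_3")

# Placeholder corpus; in practice this would hold the full set of scene excerpts.
passages = [
    "KING HENRY. We hope to make the sender blush at it. Therefore, my lords, omit no happy hour ...",
    "BANQUO. New honors come upon him, Like our strange garments, cleave not to their mould ...",
]
query = "King Henry V prepares an expedition to France to answer the Dauphin's mockery."

# Encode the corpus once, then encode each incoming query.
corpus_embeddings = model.encode(passages, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Retrieve the best-matching passages by cosine similarity.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {passages[hit['corpus_id']][:60]}...")
```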
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) (a reproduction sketch follows the metrics table below)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.3823 |
| cosine_accuracy@3 | 0.5235 |
| cosine_accuracy@5 | 0.5825 |
| cosine_accuracy@10 | 0.6564 |
| cosine_precision@1 | 0.3823 |
| cosine_precision@3 | 0.1745 |
| cosine_precision@5 | 0.1165 |
| cosine_precision@10 | 0.0656 |
| cosine_recall@1 | 0.3823 |
| cosine_recall@3 | 0.5235 |
| cosine_recall@5 | 0.5825 |
| cosine_recall@10 | 0.6564 |
| cosine_ndcg@10 | 0.5142 |
| cosine_mrr@10 | 0.4694 |
| **cosine_map@100** | **0.4766** |
| dot_accuracy@1 | 0.3823 |
| dot_accuracy@3 | 0.5235 |
| dot_accuracy@5 | 0.5825 |
| dot_accuracy@10 | 0.6564 |
| dot_precision@1 | 0.3823 |
| dot_precision@3 | 0.1745 |
| dot_precision@5 | 0.1165 |
| dot_precision@10 | 0.0656 |
| dot_recall@1 | 0.3823 |
| dot_recall@3 | 0.5235 |
| dot_recall@5 | 0.5825 |
| dot_recall@10 | 0.6564 |
| dot_ndcg@10 | 0.5142 |
| dot_mrr@10 | 0.4694 |
| dot_map@100 | 0.4766 |
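The evaluator expects three mappings: query id to query text, passage id to passage text, and query id to the set of relevant passage ids. A minimal sketch of how such an evaluation could be reproduced is shown below; the `queries`, `corpus`, and `relevant_docs` contents are placeholder examples, not the held-out split behind the numbers in the table.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rjnClarke/bgem3-shakespeare_st_3")

# Placeholder evaluation data: query id -> text, passage id -> text,
# and query id -> set of relevant passage ids.
queries = {"q1": "Who is trying to convince Coriolanus to have mercy on Rome and its citizens?"}
corpus = {
    "d1": "Enter CORIOLANUS with AUFIDIUS CORIOLANUS. What's the matter? ...",
    "d2": "Sad tidings bring I to you out of France, Of loss, of slaughter, and discomfiture ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="shakespeare-dev",
)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, MRR@k, NDCG@k, MAP@k
```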
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,352 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 26.13 tokens</li><li>max: 71 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 408.21 tokens</li><li>max: 610 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Who is trying to convince Coriolanus to have mercy on Rome and its citizens?</code> | <code>Enter CORIOLANUS with AUFIDIUS CORIOLANUS. What's the matter?<br> MENENIUS. Now, you companion, I'll say an errand for you; you shall know now that I am in estimation; you shall perceive that a Jack guardant cannot office me from my son Coriolanus. Guess but by my entertainment with him if thou stand'st not i' th' state of hanging, or of some death more long in spectatorship and crueller in suffering; behold now presently, and swoon for what's to come upon thee. The glorious gods sit in hourly synod about thy particular prosperity, and love thee no worse than thy old father Menenius does! O my son! my son! thou art preparing fire for us; look thee, here's water to quench it. I was hardly moved to come to thee; but being assured none but myself could move thee, I have been blown out of your gates with sighs, and conjure thee to pardon Rome and thy petitionary countrymen. The good gods assuage thy wrath, and turn the dregs of it upon this varlet here; this, who, like a block, hath denied my access to thee. CORIOLANUS. Away! MENENIUS. How! away! CORIOLANUS. Wife, mother, child, I know not. My affairs Are servanted to others. Though I owe My revenge properly, my remission lies In Volscian breasts. That we have been familiar, Ingrate forgetfulness shall poison rather Than pity note how much. Therefore be gone. Mine ears against your suits are stronger than Your gates against my force. Yet, for I lov'd thee, Take this along; I writ it for thy sake [Gives a letter] And would have sent it. Another word, Menenius,<br> I will not hear thee speak. This man, Aufidius,<br></code> |
| <code>The English nobility receive sad tidings of losses in France and the need for action.</code> | <code>Sad tidings bring I to you out of France,<br> Of loss, of slaughter, and discomfiture: Guienne, Champagne, Rheims, Orleans, Paris, Guysors, Poictiers, are all quite lost. BEDFORD. What say'st thou, man, before dead Henry's corse? Speak softly, or the loss of those great towns Will make him burst his lead and rise from death. GLOUCESTER. Is Paris lost? Is Rouen yielded up? If Henry were recall'd to life again, These news would cause him once more yield the ghost. EXETER. How were they lost? What treachery was us'd? MESSENGER. No treachery, but want of men and money. Amongst the soldiers this is muttered That here you maintain several factions; And whilst a field should be dispatch'd and fought, You are disputing of your generals: One would have ling'ring wars, with little cost; Another would fly swift, but wanteth wings; A third thinks, without expense at all, By guileful fair words peace may be obtain'd. Awake, awake, English nobility! Let not sloth dim your honours, new-begot. Cropp'd are the flower-de-luces in your arms; Of England's coat one half is cut away. EXETER. Were our tears wanting to this funeral, These tidings would call forth their flowing tides. BEDFORD. Me they concern; Regent I am of France. Give me my steeled coat; I'll fight for France. Away with these disgraceful wailing robes! Wounds will I lend the French instead of eyes, To weep their intermissive miseries.<br> Enter a second MESSENGER SECOND MESSENGER. Lords, view these letters full of bad<br> mischance.<br></code> |
| <code>What are the main locations where the characters are headed for battle?</code> | <code>I may dispose of him.<br> King. With all my heart. Prince. Then brother John of Lancaster, to you This honourable bounty shall belong. Go to the Douglas and deliver him Up to his pleasure, ransomless and free. His valour shown upon our crests today Hath taught us how to cherish such high deeds, Even in the bosom of our adversaries. John. I thank your Grace for this high courtesy, Which I shall give away immediately. King. Then this remains, that we divide our power. You, son John, and my cousin Westmoreland, Towards York shall bend you with your dearest speed To meet Northumberland and the prelate Scroop, Who, as we hear, are busily in arms. Myself and you, son Harry, will towards Wales To fight with Glendower and the Earl of March. Rebellion in this laud shall lose his sway, Meeting the check of such another day; And since this business so fair is done, Let us not leave till all our own be won. Exeunt.<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
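For reference, these parameters amount to constructing the loss roughly as sketched below (a scale of 20.0 and cosine similarity are also the library defaults); the in-batch passages belonging to other summaries act as negatives.
```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("BAAI/bge-m3")

# Matches the parameters listed above: a temperature-style scale of 20.0
# applied to cosine similarities between summaries and passages.
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```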
### Training Hyperparameters
#### Non-Default Hyperparameters
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
</details>
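Putting the non-default options together, a training run along these lines could be wired up roughly as follows. This is a sketch only: the single-row `train_dataset` stands in for the 10,352 (sentence_0, sentence_1) pairs, the `output_dir` name is arbitrary, and the remaining arguments keep the values listed above.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)
from sentence_transformers.training_args import BatchSamplers, MultiDatasetBatchSamplers

model = SentenceTransformer("BAAI/bge-m3")

# Placeholder: one (summary, passage) pair standing in for the real dataset.
train_dataset = Dataset.from_dict({
    "sentence_0": ["King Henry V is preparing for an expedition to France ..."],
    "sentence_1": ["That shall fly with them; for many a thousand widows ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bgem3-shakespeare_st_3",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    batch_sampler=BatchSamplers.NO_DUPLICATES,                           # non-default
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,   # non-default
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.MultipleNegativesRankingLoss(model),
)
trainer.train()
```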
### Training Logs
| Epoch | Step | Training Loss | cosine_map@100 |
|:------:|:----:|:-------------:|:--------------:|
| 0.3864 | 500 | 0.5974 | - |
| 0.7728 | 1000 | 0.5049 | - |
| 1.0 | 1294 | - | 0.4475 |
| 1.1592 | 1500 | 0.4202 | - |
| 1.5456 | 2000 | 0.2689 | - |
| 1.9320 | 2500 | 0.2452 | - |
| 2.0 | 2588 | - | 0.4758 |
| 2.3184 | 3000 | 0.17 | - |
| 2.7048 | 3500 | 0.1301 | - |
| 3.0 | 3882 | - | 0.4766 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] | Non_BioNLP |
thenlper/gte-small | thenlper | sentence-similarity | [
"sentence-transformers",
"pytorch",
"tf",
"coreml",
"onnx",
"safetensors",
"openvino",
"bert",
"mteb",
"sentence-similarity",
"Sentence Transformers",
"en",
"arxiv:2308.03281",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,690 | 1,731 | 3,841,887 | 152 | ---
language:
- en
license: mit
tags:
- mteb
- sentence-similarity
- sentence-transformers
- Sentence Transformers
model-index:
- name: gte-small
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.22388059701493
- type: ap
value: 36.09895941426988
- type: f1
value: 67.3205651539195
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.81894999999999
- type: ap
value: 88.5240138417305
- type: f1
value: 91.80367382706962
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.032
- type: f1
value: 47.4490665674719
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.604
- type: map_at_100
value: 47.535
- type: map_at_1000
value: 47.538000000000004
- type: map_at_3
value: 41.833
- type: map_at_5
value: 44.61
- type: mrr_at_1
value: 31.223
- type: mrr_at_10
value: 46.794000000000004
- type: mrr_at_100
value: 47.725
- type: mrr_at_1000
value: 47.727000000000004
- type: mrr_at_3
value: 42.07
- type: mrr_at_5
value: 44.812000000000005
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 55.440999999999995
- type: ndcg_at_100
value: 59.134
- type: ndcg_at_1000
value: 59.199
- type: ndcg_at_3
value: 45.599000000000004
- type: ndcg_at_5
value: 50.637
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.364
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.848000000000003
- type: precision_at_5
value: 13.77
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 83.64200000000001
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.543
- type: recall_at_5
value: 68.848
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.90178078197678
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.25728393431922
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.720297062897764
- type: mrr
value: 75.24139295607439
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.43527309184616
- type: cos_sim_spearman
value: 88.17128615100206
- type: euclidean_pearson
value: 87.89922623089282
- type: euclidean_spearman
value: 87.96104039655451
- type: manhattan_pearson
value: 87.9818290932077
- type: manhattan_spearman
value: 88.00923426576885
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.0844155844156
- type: f1
value: 84.01485017302213
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.36574769259432
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.4857033165287
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.261
- type: map_at_10
value: 42.419000000000004
- type: map_at_100
value: 43.927
- type: map_at_1000
value: 44.055
- type: map_at_3
value: 38.597
- type: map_at_5
value: 40.701
- type: mrr_at_1
value: 36.91
- type: mrr_at_10
value: 48.02
- type: mrr_at_100
value: 48.658
- type: mrr_at_1000
value: 48.708
- type: mrr_at_3
value: 44.945
- type: mrr_at_5
value: 46.705000000000005
- type: ndcg_at_1
value: 36.91
- type: ndcg_at_10
value: 49.353
- type: ndcg_at_100
value: 54.456
- type: ndcg_at_1000
value: 56.363
- type: ndcg_at_3
value: 43.483
- type: ndcg_at_5
value: 46.150999999999996
- type: precision_at_1
value: 36.91
- type: precision_at_10
value: 9.700000000000001
- type: precision_at_100
value: 1.557
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.078
- type: precision_at_5
value: 15.421999999999999
- type: recall_at_1
value: 30.261
- type: recall_at_10
value: 63.242
- type: recall_at_100
value: 84.09100000000001
- type: recall_at_1000
value: 96.143
- type: recall_at_3
value: 46.478
- type: recall_at_5
value: 53.708
- type: map_at_1
value: 31.145
- type: map_at_10
value: 40.996
- type: map_at_100
value: 42.266999999999996
- type: map_at_1000
value: 42.397
- type: map_at_3
value: 38.005
- type: map_at_5
value: 39.628
- type: mrr_at_1
value: 38.344
- type: mrr_at_10
value: 46.827000000000005
- type: mrr_at_100
value: 47.446
- type: mrr_at_1000
value: 47.489
- type: mrr_at_3
value: 44.448
- type: mrr_at_5
value: 45.747
- type: ndcg_at_1
value: 38.344
- type: ndcg_at_10
value: 46.733000000000004
- type: ndcg_at_100
value: 51.103
- type: ndcg_at_1000
value: 53.075
- type: ndcg_at_3
value: 42.366
- type: ndcg_at_5
value: 44.242
- type: precision_at_1
value: 38.344
- type: precision_at_10
value: 8.822000000000001
- type: precision_at_100
value: 1.417
- type: precision_at_1000
value: 0.187
- type: precision_at_3
value: 20.403
- type: precision_at_5
value: 14.306
- type: recall_at_1
value: 31.145
- type: recall_at_10
value: 56.909
- type: recall_at_100
value: 75.274
- type: recall_at_1000
value: 87.629
- type: recall_at_3
value: 43.784
- type: recall_at_5
value: 49.338
- type: map_at_1
value: 38.83
- type: map_at_10
value: 51.553000000000004
- type: map_at_100
value: 52.581
- type: map_at_1000
value: 52.638
- type: map_at_3
value: 48.112
- type: map_at_5
value: 50.095
- type: mrr_at_1
value: 44.513999999999996
- type: mrr_at_10
value: 54.998000000000005
- type: mrr_at_100
value: 55.650999999999996
- type: mrr_at_1000
value: 55.679
- type: mrr_at_3
value: 52.602000000000004
- type: mrr_at_5
value: 53.931
- type: ndcg_at_1
value: 44.513999999999996
- type: ndcg_at_10
value: 57.67400000000001
- type: ndcg_at_100
value: 61.663999999999994
- type: ndcg_at_1000
value: 62.743
- type: ndcg_at_3
value: 51.964
- type: ndcg_at_5
value: 54.773
- type: precision_at_1
value: 44.513999999999996
- type: precision_at_10
value: 9.423
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 23.323
- type: precision_at_5
value: 16.163
- type: recall_at_1
value: 38.83
- type: recall_at_10
value: 72.327
- type: recall_at_100
value: 89.519
- type: recall_at_1000
value: 97.041
- type: recall_at_3
value: 57.206
- type: recall_at_5
value: 63.88399999999999
- type: map_at_1
value: 25.484
- type: map_at_10
value: 34.527
- type: map_at_100
value: 35.661
- type: map_at_1000
value: 35.739
- type: map_at_3
value: 32.199
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 27.458
- type: mrr_at_10
value: 36.543
- type: mrr_at_100
value: 37.482
- type: mrr_at_1000
value: 37.543
- type: mrr_at_3
value: 34.256
- type: mrr_at_5
value: 35.618
- type: ndcg_at_1
value: 27.458
- type: ndcg_at_10
value: 39.396
- type: ndcg_at_100
value: 44.742
- type: ndcg_at_1000
value: 46.708
- type: ndcg_at_3
value: 34.817
- type: ndcg_at_5
value: 37.247
- type: precision_at_1
value: 27.458
- type: precision_at_10
value: 5.976999999999999
- type: precision_at_100
value: 0.907
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 14.878
- type: precision_at_5
value: 10.35
- type: recall_at_1
value: 25.484
- type: recall_at_10
value: 52.317
- type: recall_at_100
value: 76.701
- type: recall_at_1000
value: 91.408
- type: recall_at_3
value: 40.043
- type: recall_at_5
value: 45.879
- type: map_at_1
value: 16.719
- type: map_at_10
value: 25.269000000000002
- type: map_at_100
value: 26.442
- type: map_at_1000
value: 26.557
- type: map_at_3
value: 22.56
- type: map_at_5
value: 24.082
- type: mrr_at_1
value: 20.896
- type: mrr_at_10
value: 29.982999999999997
- type: mrr_at_100
value: 30.895
- type: mrr_at_1000
value: 30.961
- type: mrr_at_3
value: 27.239
- type: mrr_at_5
value: 28.787000000000003
- type: ndcg_at_1
value: 20.896
- type: ndcg_at_10
value: 30.814000000000004
- type: ndcg_at_100
value: 36.418
- type: ndcg_at_1000
value: 39.182
- type: ndcg_at_3
value: 25.807999999999996
- type: ndcg_at_5
value: 28.143
- type: precision_at_1
value: 20.896
- type: precision_at_10
value: 5.821
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 12.562000000000001
- type: precision_at_5
value: 9.254
- type: recall_at_1
value: 16.719
- type: recall_at_10
value: 43.155
- type: recall_at_100
value: 67.831
- type: recall_at_1000
value: 87.617
- type: recall_at_3
value: 29.259
- type: recall_at_5
value: 35.260999999999996
- type: map_at_1
value: 29.398999999999997
- type: map_at_10
value: 39.876
- type: map_at_100
value: 41.205999999999996
- type: map_at_1000
value: 41.321999999999996
- type: map_at_3
value: 36.588
- type: map_at_5
value: 38.538
- type: mrr_at_1
value: 35.9
- type: mrr_at_10
value: 45.528
- type: mrr_at_100
value: 46.343
- type: mrr_at_1000
value: 46.388
- type: mrr_at_3
value: 42.862
- type: mrr_at_5
value: 44.440000000000005
- type: ndcg_at_1
value: 35.9
- type: ndcg_at_10
value: 45.987
- type: ndcg_at_100
value: 51.370000000000005
- type: ndcg_at_1000
value: 53.400000000000006
- type: ndcg_at_3
value: 40.841
- type: ndcg_at_5
value: 43.447
- type: precision_at_1
value: 35.9
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.283
- type: precision_at_1000
value: 0.166
- type: precision_at_3
value: 19.538
- type: precision_at_5
value: 13.975000000000001
- type: recall_at_1
value: 29.398999999999997
- type: recall_at_10
value: 58.361
- type: recall_at_100
value: 81.081
- type: recall_at_1000
value: 94.004
- type: recall_at_3
value: 43.657000000000004
- type: recall_at_5
value: 50.519999999999996
- type: map_at_1
value: 21.589
- type: map_at_10
value: 31.608999999999998
- type: map_at_100
value: 33.128
- type: map_at_1000
value: 33.247
- type: map_at_3
value: 28.671999999999997
- type: map_at_5
value: 30.233999999999998
- type: mrr_at_1
value: 26.712000000000003
- type: mrr_at_10
value: 36.713
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.771
- type: mrr_at_3
value: 34.075
- type: mrr_at_5
value: 35.451
- type: ndcg_at_1
value: 26.712000000000003
- type: ndcg_at_10
value: 37.519999999999996
- type: ndcg_at_100
value: 43.946000000000005
- type: ndcg_at_1000
value: 46.297
- type: ndcg_at_3
value: 32.551
- type: ndcg_at_5
value: 34.660999999999994
- type: precision_at_1
value: 26.712000000000003
- type: precision_at_10
value: 7.066
- type: precision_at_100
value: 1.216
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 15.906
- type: precision_at_5
value: 11.437999999999999
- type: recall_at_1
value: 21.589
- type: recall_at_10
value: 50.090999999999994
- type: recall_at_100
value: 77.43900000000001
- type: recall_at_1000
value: 93.35900000000001
- type: recall_at_3
value: 36.028999999999996
- type: recall_at_5
value: 41.698
- type: map_at_1
value: 25.121666666666663
- type: map_at_10
value: 34.46258333333334
- type: map_at_100
value: 35.710499999999996
- type: map_at_1000
value: 35.82691666666666
- type: map_at_3
value: 31.563249999999996
- type: map_at_5
value: 33.189750000000004
- type: mrr_at_1
value: 29.66441666666667
- type: mrr_at_10
value: 38.5455
- type: mrr_at_100
value: 39.39566666666667
- type: mrr_at_1000
value: 39.45325
- type: mrr_at_3
value: 36.003333333333345
- type: mrr_at_5
value: 37.440916666666666
- type: ndcg_at_1
value: 29.66441666666667
- type: ndcg_at_10
value: 39.978416666666675
- type: ndcg_at_100
value: 45.278666666666666
- type: ndcg_at_1000
value: 47.52275
- type: ndcg_at_3
value: 35.00058333333334
- type: ndcg_at_5
value: 37.34908333333333
- type: precision_at_1
value: 29.66441666666667
- type: precision_at_10
value: 7.094500000000001
- type: precision_at_100
value: 1.1523333333333332
- type: precision_at_1000
value: 0.15358333333333332
- type: precision_at_3
value: 16.184166666666663
- type: precision_at_5
value: 11.6005
- type: recall_at_1
value: 25.121666666666663
- type: recall_at_10
value: 52.23975000000001
- type: recall_at_100
value: 75.48408333333333
- type: recall_at_1000
value: 90.95316666666668
- type: recall_at_3
value: 38.38458333333333
- type: recall_at_5
value: 44.39933333333333
- type: map_at_1
value: 23.569000000000003
- type: map_at_10
value: 30.389
- type: map_at_100
value: 31.396
- type: map_at_1000
value: 31.493
- type: map_at_3
value: 28.276
- type: map_at_5
value: 29.459000000000003
- type: mrr_at_1
value: 26.534000000000002
- type: mrr_at_10
value: 33.217999999999996
- type: mrr_at_100
value: 34.054
- type: mrr_at_1000
value: 34.12
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.330999999999996
- type: ndcg_at_1
value: 26.534000000000002
- type: ndcg_at_10
value: 34.608
- type: ndcg_at_100
value: 39.391999999999996
- type: ndcg_at_1000
value: 41.837999999999994
- type: ndcg_at_3
value: 30.564999999999998
- type: ndcg_at_5
value: 32.509
- type: precision_at_1
value: 26.534000000000002
- type: precision_at_10
value: 5.414
- type: precision_at_100
value: 0.847
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 9.202
- type: recall_at_1
value: 23.569000000000003
- type: recall_at_10
value: 44.896
- type: recall_at_100
value: 66.476
- type: recall_at_1000
value: 84.548
- type: recall_at_3
value: 33.79
- type: recall_at_5
value: 38.512
- type: map_at_1
value: 16.36
- type: map_at_10
value: 23.57
- type: map_at_100
value: 24.698999999999998
- type: map_at_1000
value: 24.834999999999997
- type: map_at_3
value: 21.093
- type: map_at_5
value: 22.418
- type: mrr_at_1
value: 19.718
- type: mrr_at_10
value: 27.139999999999997
- type: mrr_at_100
value: 28.097
- type: mrr_at_1000
value: 28.177999999999997
- type: mrr_at_3
value: 24.805
- type: mrr_at_5
value: 26.121
- type: ndcg_at_1
value: 19.718
- type: ndcg_at_10
value: 28.238999999999997
- type: ndcg_at_100
value: 33.663
- type: ndcg_at_1000
value: 36.763
- type: ndcg_at_3
value: 23.747
- type: ndcg_at_5
value: 25.796000000000003
- type: precision_at_1
value: 19.718
- type: precision_at_10
value: 5.282
- type: precision_at_100
value: 0.9390000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 11.264000000000001
- type: precision_at_5
value: 8.341
- type: recall_at_1
value: 16.36
- type: recall_at_10
value: 38.669
- type: recall_at_100
value: 63.184
- type: recall_at_1000
value: 85.33800000000001
- type: recall_at_3
value: 26.214
- type: recall_at_5
value: 31.423000000000002
- type: map_at_1
value: 25.618999999999996
- type: map_at_10
value: 34.361999999999995
- type: map_at_100
value: 35.534
- type: map_at_1000
value: 35.634
- type: map_at_3
value: 31.402
- type: map_at_5
value: 32.815
- type: mrr_at_1
value: 30.037000000000003
- type: mrr_at_10
value: 38.284
- type: mrr_at_100
value: 39.141999999999996
- type: mrr_at_1000
value: 39.2
- type: mrr_at_3
value: 35.603
- type: mrr_at_5
value: 36.867
- type: ndcg_at_1
value: 30.037000000000003
- type: ndcg_at_10
value: 39.87
- type: ndcg_at_100
value: 45.243
- type: ndcg_at_1000
value: 47.507
- type: ndcg_at_3
value: 34.371
- type: ndcg_at_5
value: 36.521
- type: precision_at_1
value: 30.037000000000003
- type: precision_at_10
value: 6.819
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.392
- type: precision_at_5
value: 10.821
- type: recall_at_1
value: 25.618999999999996
- type: recall_at_10
value: 52.869
- type: recall_at_100
value: 76.395
- type: recall_at_1000
value: 92.19500000000001
- type: recall_at_3
value: 37.943
- type: recall_at_5
value: 43.342999999999996
- type: map_at_1
value: 23.283
- type: map_at_10
value: 32.155
- type: map_at_100
value: 33.724
- type: map_at_1000
value: 33.939
- type: map_at_3
value: 29.018
- type: map_at_5
value: 30.864000000000004
- type: mrr_at_1
value: 28.063
- type: mrr_at_10
value: 36.632
- type: mrr_at_100
value: 37.606
- type: mrr_at_1000
value: 37.671
- type: mrr_at_3
value: 33.992
- type: mrr_at_5
value: 35.613
- type: ndcg_at_1
value: 28.063
- type: ndcg_at_10
value: 38.024
- type: ndcg_at_100
value: 44.292
- type: ndcg_at_1000
value: 46.818
- type: ndcg_at_3
value: 32.965
- type: ndcg_at_5
value: 35.562
- type: precision_at_1
value: 28.063
- type: precision_at_10
value: 7.352
- type: precision_at_100
value: 1.514
- type: precision_at_1000
value: 0.23800000000000002
- type: precision_at_3
value: 15.481
- type: precision_at_5
value: 11.542
- type: recall_at_1
value: 23.283
- type: recall_at_10
value: 49.756
- type: recall_at_100
value: 78.05
- type: recall_at_1000
value: 93.854
- type: recall_at_3
value: 35.408
- type: recall_at_5
value: 42.187000000000005
- type: map_at_1
value: 19.201999999999998
- type: map_at_10
value: 26.826
- type: map_at_100
value: 27.961000000000002
- type: map_at_1000
value: 28.066999999999997
- type: map_at_3
value: 24.237000000000002
- type: map_at_5
value: 25.811
- type: mrr_at_1
value: 20.887
- type: mrr_at_10
value: 28.660000000000004
- type: mrr_at_100
value: 29.660999999999998
- type: mrr_at_1000
value: 29.731
- type: mrr_at_3
value: 26.155
- type: mrr_at_5
value: 27.68
- type: ndcg_at_1
value: 20.887
- type: ndcg_at_10
value: 31.523
- type: ndcg_at_100
value: 37.055
- type: ndcg_at_1000
value: 39.579
- type: ndcg_at_3
value: 26.529000000000003
- type: ndcg_at_5
value: 29.137
- type: precision_at_1
value: 20.887
- type: precision_at_10
value: 5.065
- type: precision_at_100
value: 0.856
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 11.399
- type: precision_at_5
value: 8.392
- type: recall_at_1
value: 19.201999999999998
- type: recall_at_10
value: 44.285000000000004
- type: recall_at_100
value: 69.768
- type: recall_at_1000
value: 88.302
- type: recall_at_3
value: 30.804
- type: recall_at_5
value: 37.039
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.244
- type: map_at_10
value: 18.956
- type: map_at_100
value: 20.674
- type: map_at_1000
value: 20.863
- type: map_at_3
value: 15.923000000000002
- type: map_at_5
value: 17.518
- type: mrr_at_1
value: 25.080999999999996
- type: mrr_at_10
value: 35.94
- type: mrr_at_100
value: 36.969
- type: mrr_at_1000
value: 37.013
- type: mrr_at_3
value: 32.617000000000004
- type: mrr_at_5
value: 34.682
- type: ndcg_at_1
value: 25.080999999999996
- type: ndcg_at_10
value: 26.539
- type: ndcg_at_100
value: 33.601
- type: ndcg_at_1000
value: 37.203
- type: ndcg_at_3
value: 21.695999999999998
- type: ndcg_at_5
value: 23.567
- type: precision_at_1
value: 25.080999999999996
- type: precision_at_10
value: 8.143
- type: precision_at_100
value: 1.5650000000000002
- type: precision_at_1000
value: 0.22300000000000003
- type: precision_at_3
value: 15.983
- type: precision_at_5
value: 12.417
- type: recall_at_1
value: 11.244
- type: recall_at_10
value: 31.457
- type: recall_at_100
value: 55.92
- type: recall_at_1000
value: 76.372
- type: recall_at_3
value: 19.784
- type: recall_at_5
value: 24.857000000000003
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.595
- type: map_at_10
value: 18.75
- type: map_at_100
value: 26.354
- type: map_at_1000
value: 27.912
- type: map_at_3
value: 13.794
- type: map_at_5
value: 16.021
- type: mrr_at_1
value: 65.75
- type: mrr_at_10
value: 73.837
- type: mrr_at_100
value: 74.22800000000001
- type: mrr_at_1000
value: 74.234
- type: mrr_at_3
value: 72.5
- type: mrr_at_5
value: 73.387
- type: ndcg_at_1
value: 52.625
- type: ndcg_at_10
value: 39.101
- type: ndcg_at_100
value: 43.836000000000006
- type: ndcg_at_1000
value: 51.086
- type: ndcg_at_3
value: 44.229
- type: ndcg_at_5
value: 41.555
- type: precision_at_1
value: 65.75
- type: precision_at_10
value: 30.45
- type: precision_at_100
value: 9.81
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 48.667
- type: precision_at_5
value: 40.8
- type: recall_at_1
value: 8.595
- type: recall_at_10
value: 24.201
- type: recall_at_100
value: 50.096
- type: recall_at_1000
value: 72.677
- type: recall_at_3
value: 15.212
- type: recall_at_5
value: 18.745
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.565
- type: f1
value: 41.49914329345582
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.60000000000001
- type: map_at_10
value: 76.838
- type: map_at_100
value: 77.076
- type: map_at_1000
value: 77.09
- type: map_at_3
value: 75.545
- type: map_at_5
value: 76.39
- type: mrr_at_1
value: 71.707
- type: mrr_at_10
value: 81.514
- type: mrr_at_100
value: 81.64099999999999
- type: mrr_at_1000
value: 81.645
- type: mrr_at_3
value: 80.428
- type: mrr_at_5
value: 81.159
- type: ndcg_at_1
value: 71.707
- type: ndcg_at_10
value: 81.545
- type: ndcg_at_100
value: 82.477
- type: ndcg_at_1000
value: 82.73899999999999
- type: ndcg_at_3
value: 79.292
- type: ndcg_at_5
value: 80.599
- type: precision_at_1
value: 71.707
- type: precision_at_10
value: 10.035
- type: precision_at_100
value: 1.068
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 30.918
- type: precision_at_5
value: 19.328
- type: recall_at_1
value: 66.60000000000001
- type: recall_at_10
value: 91.353
- type: recall_at_100
value: 95.21
- type: recall_at_1000
value: 96.89999999999999
- type: recall_at_3
value: 85.188
- type: recall_at_5
value: 88.52
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.338
- type: map_at_10
value: 31.752000000000002
- type: map_at_100
value: 33.516
- type: map_at_1000
value: 33.694
- type: map_at_3
value: 27.716
- type: map_at_5
value: 29.67
- type: mrr_at_1
value: 38.117000000000004
- type: mrr_at_10
value: 47.323
- type: mrr_at_100
value: 48.13
- type: mrr_at_1000
value: 48.161
- type: mrr_at_3
value: 45.062000000000005
- type: mrr_at_5
value: 46.358
- type: ndcg_at_1
value: 38.117000000000004
- type: ndcg_at_10
value: 39.353
- type: ndcg_at_100
value: 46.044000000000004
- type: ndcg_at_1000
value: 49.083
- type: ndcg_at_3
value: 35.891
- type: ndcg_at_5
value: 36.661
- type: precision_at_1
value: 38.117000000000004
- type: precision_at_10
value: 11.187999999999999
- type: precision_at_100
value: 1.802
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 24.126
- type: precision_at_5
value: 17.562
- type: recall_at_1
value: 19.338
- type: recall_at_10
value: 45.735
- type: recall_at_100
value: 71.281
- type: recall_at_1000
value: 89.537
- type: recall_at_3
value: 32.525
- type: recall_at_5
value: 37.671
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.995
- type: map_at_10
value: 55.032000000000004
- type: map_at_100
value: 55.86
- type: map_at_1000
value: 55.932
- type: map_at_3
value: 52.125
- type: map_at_5
value: 53.884
- type: mrr_at_1
value: 73.991
- type: mrr_at_10
value: 80.096
- type: mrr_at_100
value: 80.32000000000001
- type: mrr_at_1000
value: 80.331
- type: mrr_at_3
value: 79.037
- type: mrr_at_5
value: 79.719
- type: ndcg_at_1
value: 73.991
- type: ndcg_at_10
value: 63.786
- type: ndcg_at_100
value: 66.78
- type: ndcg_at_1000
value: 68.255
- type: ndcg_at_3
value: 59.501000000000005
- type: ndcg_at_5
value: 61.82299999999999
- type: precision_at_1
value: 73.991
- type: precision_at_10
value: 13.157
- type: precision_at_100
value: 1.552
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 37.519999999999996
- type: precision_at_5
value: 24.351
- type: recall_at_1
value: 36.995
- type: recall_at_10
value: 65.78699999999999
- type: recall_at_100
value: 77.583
- type: recall_at_1000
value: 87.421
- type: recall_at_3
value: 56.279999999999994
- type: recall_at_5
value: 60.878
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.80239999999999
- type: ap
value: 81.97305141128378
- type: f1
value: 86.76976305549273
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.166
- type: map_at_10
value: 33.396
- type: map_at_100
value: 34.588
- type: map_at_1000
value: 34.637
- type: map_at_3
value: 29.509999999999998
- type: map_at_5
value: 31.719
- type: mrr_at_1
value: 21.762
- type: mrr_at_10
value: 33.969
- type: mrr_at_100
value: 35.099000000000004
- type: mrr_at_1000
value: 35.141
- type: mrr_at_3
value: 30.148000000000003
- type: mrr_at_5
value: 32.324000000000005
- type: ndcg_at_1
value: 21.776999999999997
- type: ndcg_at_10
value: 40.306999999999995
- type: ndcg_at_100
value: 46.068
- type: ndcg_at_1000
value: 47.3
- type: ndcg_at_3
value: 32.416
- type: ndcg_at_5
value: 36.345
- type: precision_at_1
value: 21.776999999999997
- type: precision_at_10
value: 6.433
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.897
- type: precision_at_5
value: 10.324
- type: recall_at_1
value: 21.166
- type: recall_at_10
value: 61.587
- type: recall_at_100
value: 88.251
- type: recall_at_1000
value: 97.727
- type: recall_at_3
value: 40.196
- type: recall_at_5
value: 49.611
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.04605563155496
- type: f1
value: 92.78007303978372
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.65116279069767
- type: f1
value: 52.75775172527262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.34633490248822
- type: f1
value: 68.15345065392562
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.63887020847343
- type: f1
value: 76.08074680233685
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.77933406071333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.06504927238196
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.20682480490871
- type: mrr
value: 33.41462721527003
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.548
- type: map_at_10
value: 13.086999999999998
- type: map_at_100
value: 16.698
- type: map_at_1000
value: 18.151999999999997
- type: map_at_3
value: 9.576
- type: map_at_5
value: 11.175
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 53.635999999999996
- type: mrr_at_100
value: 54.228
- type: mrr_at_1000
value: 54.26499999999999
- type: mrr_at_3
value: 51.754
- type: mrr_at_5
value: 53.086
- type: ndcg_at_1
value: 42.724000000000004
- type: ndcg_at_10
value: 34.769
- type: ndcg_at_100
value: 32.283
- type: ndcg_at_1000
value: 40.843
- type: ndcg_at_3
value: 39.852
- type: ndcg_at_5
value: 37.858999999999995
- type: precision_at_1
value: 44.272
- type: precision_at_10
value: 26.068
- type: precision_at_100
value: 8.328000000000001
- type: precision_at_1000
value: 2.1
- type: precision_at_3
value: 37.874
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 5.548
- type: recall_at_10
value: 16.936999999999998
- type: recall_at_100
value: 33.72
- type: recall_at_1000
value: 64.348
- type: recall_at_3
value: 10.764999999999999
- type: recall_at_5
value: 13.361
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.008
- type: map_at_10
value: 42.675000000000004
- type: map_at_100
value: 43.85
- type: map_at_1000
value: 43.884
- type: map_at_3
value: 38.286
- type: map_at_5
value: 40.78
- type: mrr_at_1
value: 31.518
- type: mrr_at_10
value: 45.015
- type: mrr_at_100
value: 45.924
- type: mrr_at_1000
value: 45.946999999999996
- type: mrr_at_3
value: 41.348
- type: mrr_at_5
value: 43.428
- type: ndcg_at_1
value: 31.489
- type: ndcg_at_10
value: 50.285999999999994
- type: ndcg_at_100
value: 55.291999999999994
- type: ndcg_at_1000
value: 56.05
- type: ndcg_at_3
value: 41.976
- type: ndcg_at_5
value: 46.103
- type: precision_at_1
value: 31.489
- type: precision_at_10
value: 8.456
- type: precision_at_100
value: 1.125
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 19.09
- type: precision_at_5
value: 13.841000000000001
- type: recall_at_1
value: 28.008
- type: recall_at_10
value: 71.21499999999999
- type: recall_at_100
value: 92.99
- type: recall_at_1000
value: 98.578
- type: recall_at_3
value: 49.604
- type: recall_at_5
value: 59.094
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.351
- type: map_at_10
value: 84.163
- type: map_at_100
value: 84.785
- type: map_at_1000
value: 84.801
- type: map_at_3
value: 81.16
- type: map_at_5
value: 83.031
- type: mrr_at_1
value: 80.96
- type: mrr_at_10
value: 87.241
- type: mrr_at_100
value: 87.346
- type: mrr_at_1000
value: 87.347
- type: mrr_at_3
value: 86.25699999999999
- type: mrr_at_5
value: 86.907
- type: ndcg_at_1
value: 80.97
- type: ndcg_at_10
value: 88.017
- type: ndcg_at_100
value: 89.241
- type: ndcg_at_1000
value: 89.34299999999999
- type: ndcg_at_3
value: 85.053
- type: ndcg_at_5
value: 86.663
- type: precision_at_1
value: 80.97
- type: precision_at_10
value: 13.358
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.143
- type: precision_at_5
value: 24.451999999999998
- type: recall_at_1
value: 70.351
- type: recall_at_10
value: 95.39800000000001
- type: recall_at_100
value: 99.55199999999999
- type: recall_at_1000
value: 99.978
- type: recall_at_3
value: 86.913
- type: recall_at_5
value: 91.448
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.62406719814139
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.386700035141736
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.618
- type: map_at_10
value: 12.920000000000002
- type: map_at_100
value: 15.304
- type: map_at_1000
value: 15.656999999999998
- type: map_at_3
value: 9.187
- type: map_at_5
value: 10.937
- type: mrr_at_1
value: 22.8
- type: mrr_at_10
value: 35.13
- type: mrr_at_100
value: 36.239
- type: mrr_at_1000
value: 36.291000000000004
- type: mrr_at_3
value: 31.917
- type: mrr_at_5
value: 33.787
- type: ndcg_at_1
value: 22.8
- type: ndcg_at_10
value: 21.382
- type: ndcg_at_100
value: 30.257
- type: ndcg_at_1000
value: 36.001
- type: ndcg_at_3
value: 20.43
- type: ndcg_at_5
value: 17.622
- type: precision_at_1
value: 22.8
- type: precision_at_10
value: 11.26
- type: precision_at_100
value: 2.405
- type: precision_at_1000
value: 0.377
- type: precision_at_3
value: 19.633
- type: precision_at_5
value: 15.68
- type: recall_at_1
value: 4.618
- type: recall_at_10
value: 22.811999999999998
- type: recall_at_100
value: 48.787000000000006
- type: recall_at_1000
value: 76.63799999999999
- type: recall_at_3
value: 11.952
- type: recall_at_5
value: 15.892000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.01529458252244
- type: cos_sim_spearman
value: 77.92985224770254
- type: euclidean_pearson
value: 81.04251429422487
- type: euclidean_spearman
value: 77.92838490549133
- type: manhattan_pearson
value: 80.95892251458979
- type: manhattan_spearman
value: 77.81028089705941
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 83.97885282534388
- type: cos_sim_spearman
value: 75.1221970851712
- type: euclidean_pearson
value: 80.34455956720097
- type: euclidean_spearman
value: 74.5894274239938
- type: manhattan_pearson
value: 80.38999766325465
- type: manhattan_spearman
value: 74.68524557166975
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.95746064915672
- type: cos_sim_spearman
value: 85.08683458043946
- type: euclidean_pearson
value: 84.56699492836385
- type: euclidean_spearman
value: 85.66089116133713
- type: manhattan_pearson
value: 84.47553323458541
- type: manhattan_spearman
value: 85.56142206781472
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.71377893595067
- type: cos_sim_spearman
value: 81.03453291428589
- type: euclidean_pearson
value: 82.57136298308613
- type: euclidean_spearman
value: 81.15839961890875
- type: manhattan_pearson
value: 82.55157879373837
- type: manhattan_spearman
value: 81.1540163767054
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.64197832372373
- type: cos_sim_spearman
value: 88.31966852492485
- type: euclidean_pearson
value: 87.98692129976983
- type: euclidean_spearman
value: 88.6247340837856
- type: manhattan_pearson
value: 87.90437827826412
- type: manhattan_spearman
value: 88.56278787131457
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.84159950146693
- type: cos_sim_spearman
value: 83.90678384140168
- type: euclidean_pearson
value: 83.19005018860221
- type: euclidean_spearman
value: 84.16260415876295
- type: manhattan_pearson
value: 83.05030612994494
- type: manhattan_spearman
value: 83.99605629718336
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49935350176666
- type: cos_sim_spearman
value: 87.59086606735383
- type: euclidean_pearson
value: 88.06537181129983
- type: euclidean_spearman
value: 87.6687448086014
- type: manhattan_pearson
value: 87.96599131972935
- type: manhattan_spearman
value: 87.63295748969642
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.68232799482763
- type: cos_sim_spearman
value: 67.99930378085793
- type: euclidean_pearson
value: 68.50275360001696
- type: euclidean_spearman
value: 67.81588179309259
- type: manhattan_pearson
value: 68.5892154749763
- type: manhattan_spearman
value: 67.84357259640682
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.37049618406554
- type: cos_sim_spearman
value: 85.57014313159492
- type: euclidean_pearson
value: 85.57469513908282
- type: euclidean_spearman
value: 85.661948135258
- type: manhattan_pearson
value: 85.36866831229028
- type: manhattan_spearman
value: 85.5043455368843
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.83259065376154
- type: mrr
value: 95.58455433455433
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.817
- type: map_at_10
value: 68.459
- type: map_at_100
value: 68.951
- type: map_at_1000
value: 68.979
- type: map_at_3
value: 65.791
- type: map_at_5
value: 67.583
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 69.368
- type: mrr_at_100
value: 69.721
- type: mrr_at_1000
value: 69.744
- type: mrr_at_3
value: 67.278
- type: mrr_at_5
value: 68.611
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 72.70100000000001
- type: ndcg_at_100
value: 74.928
- type: ndcg_at_1000
value: 75.553
- type: ndcg_at_3
value: 68.203
- type: ndcg_at_5
value: 70.804
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.444000000000003
- type: precision_at_5
value: 17.599999999999998
- type: recall_at_1
value: 58.817
- type: recall_at_10
value: 84.789
- type: recall_at_100
value: 95.0
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 72.8
- type: recall_at_5
value: 79.294
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8108910891089
- type: cos_sim_ap
value: 95.5743678558349
- type: cos_sim_f1
value: 90.43133366385722
- type: cos_sim_precision
value: 89.67551622418878
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.75841584158415
- type: dot_ap
value: 94.00786363627253
- type: dot_f1
value: 87.51910341314316
- type: dot_precision
value: 89.20041536863967
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.81485148514851
- type: euclidean_ap
value: 95.4752113136905
- type: euclidean_f1
value: 90.44334975369456
- type: euclidean_precision
value: 89.126213592233
- type: euclidean_recall
value: 91.8
- type: manhattan_accuracy
value: 99.81584158415842
- type: manhattan_ap
value: 95.5163172682464
- type: manhattan_f1
value: 90.51987767584097
- type: manhattan_precision
value: 92.3076923076923
- type: manhattan_recall
value: 88.8
- type: max_accuracy
value: 99.81584158415842
- type: max_ap
value: 95.5743678558349
- type: max_f1
value: 90.51987767584097
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 62.63235986949449
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.334795589585575
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.02955214518782
- type: mrr
value: 52.8004838298956
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.63769566275453
- type: cos_sim_spearman
value: 30.422379185989335
- type: dot_pearson
value: 26.88493071882256
- type: dot_spearman
value: 26.505249740971305
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.654
- type: map_at_100
value: 10.095
- type: map_at_1000
value: 25.808999999999997
- type: map_at_3
value: 0.594
- type: map_at_5
value: 0.9289999999999999
- type: mrr_at_1
value: 78.0
- type: mrr_at_10
value: 87.019
- type: mrr_at_100
value: 87.019
- type: mrr_at_1000
value: 87.019
- type: mrr_at_3
value: 86.333
- type: mrr_at_5
value: 86.733
- type: ndcg_at_1
value: 73.0
- type: ndcg_at_10
value: 66.52900000000001
- type: ndcg_at_100
value: 53.433
- type: ndcg_at_1000
value: 51.324000000000005
- type: ndcg_at_3
value: 72.02199999999999
- type: ndcg_at_5
value: 69.696
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 70.39999999999999
- type: precision_at_100
value: 55.46
- type: precision_at_1000
value: 22.758
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.8849999999999998
- type: recall_at_100
value: 13.801
- type: recall_at_1000
value: 49.649
- type: recall_at_3
value: 0.632
- type: recall_at_5
value: 1.009
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.797
- type: map_at_10
value: 9.01
- type: map_at_100
value: 14.682
- type: map_at_1000
value: 16.336000000000002
- type: map_at_3
value: 4.546
- type: map_at_5
value: 5.9270000000000005
- type: mrr_at_1
value: 24.490000000000002
- type: mrr_at_10
value: 41.156
- type: mrr_at_100
value: 42.392
- type: mrr_at_1000
value: 42.408
- type: mrr_at_3
value: 38.775999999999996
- type: mrr_at_5
value: 40.102
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 22.222
- type: ndcg_at_100
value: 34.405
- type: ndcg_at_1000
value: 46.599000000000004
- type: ndcg_at_3
value: 25.261
- type: ndcg_at_5
value: 22.695999999999998
- type: precision_at_1
value: 24.490000000000002
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.306
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 22.857
- type: recall_at_1
value: 1.797
- type: recall_at_10
value: 15.706000000000001
- type: recall_at_100
value: 46.412
- type: recall_at_1000
value: 83.159
- type: recall_at_3
value: 6.1370000000000005
- type: recall_at_5
value: 8.599
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.3302
- type: ap
value: 14.169121204575601
- type: f1
value: 54.229345975274235
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 58.22297679683077
- type: f1
value: 58.62984908377875
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.952922428464255
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.68140907194373
- type: cos_sim_ap
value: 70.12180123666836
- type: cos_sim_f1
value: 65.77501791258658
- type: cos_sim_precision
value: 60.07853403141361
- type: cos_sim_recall
value: 72.66490765171504
- type: dot_accuracy
value: 81.92167848840674
- type: dot_ap
value: 60.49837581423469
- type: dot_f1
value: 58.44186046511628
- type: dot_precision
value: 52.24532224532224
- type: dot_recall
value: 66.3060686015831
- type: euclidean_accuracy
value: 84.73505394289802
- type: euclidean_ap
value: 70.3278904593286
- type: euclidean_f1
value: 65.98851124940161
- type: euclidean_precision
value: 60.38107752956636
- type: euclidean_recall
value: 72.74406332453826
- type: manhattan_accuracy
value: 84.73505394289802
- type: manhattan_ap
value: 70.00737738537337
- type: manhattan_f1
value: 65.80150784822642
- type: manhattan_precision
value: 61.892583120204606
- type: manhattan_recall
value: 70.23746701846966
- type: max_accuracy
value: 84.73505394289802
- type: max_ap
value: 70.3278904593286
- type: max_f1
value: 65.98851124940161
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.44258159661582
- type: cos_sim_ap
value: 84.91926704880888
- type: cos_sim_f1
value: 77.07651086632926
- type: cos_sim_precision
value: 74.5894554883319
- type: cos_sim_recall
value: 79.73514012935017
- type: dot_accuracy
value: 85.88116583226608
- type: dot_ap
value: 78.9753854779923
- type: dot_f1
value: 72.17757637979255
- type: dot_precision
value: 66.80647486729143
- type: dot_recall
value: 78.48783492454572
- type: euclidean_accuracy
value: 88.5299025885823
- type: euclidean_ap
value: 85.08006075642194
- type: euclidean_f1
value: 77.29637336504163
- type: euclidean_precision
value: 74.69836253950014
- type: euclidean_recall
value: 80.08161379735141
- type: manhattan_accuracy
value: 88.55124771995187
- type: manhattan_ap
value: 85.00941529932851
- type: manhattan_f1
value: 77.33100233100232
- type: manhattan_precision
value: 73.37572573956317
- type: manhattan_recall
value: 81.73698798891284
- type: max_accuracy
value: 88.55124771995187
- type: max_ap
value: 85.08006075642194
- type: max_f1
value: 77.33100233100232
---
# gte-small
General Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281)
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently come in three sizes: [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs covering a wide range of domains and scenarios, which enables them to be applied to various downstream text embedding tasks, including **information retrieval**, **semantic textual similarity**, and **text reranking**.
## Metrics
We compared the performance of the GTE models with other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).
| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024| 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |
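The numbers above can be spot-checked with the [`mteb`](https://github.com/embeddings-benchmark/mteb) package. The snippet below is a minimal sketch assuming a recent `mteb` release and a single illustrative task; exact scores may differ slightly depending on the package version.
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Evaluate one MTEB task as a sanity check (the task choice is illustrative).
model = SentenceTransformer("thenlper/gte-small")
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/gte-small")
```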
## Usage
Code example
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
input_texts = [
"what is the capital of China?",
"how to implement quick sort in python?",
"Beijing",
"sorting algorithms"
]
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")
model = AutoModel.from_pretrained("thenlper/gte-small")
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```
Use with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
sentences = ['That is a happy person', 'That is a very happy person']
model = SentenceTransformer('thenlper/gte-small')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```
### Limitation
This model handles English text only, and any input longer than 512 tokens is truncated to that length.
### Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
sileod/deberta-v3-base-tasksource-nli | sileod | zero-shot-classification | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"deberta-v3-base",
"deberta-v3",
"deberta",
"nli",
"natural-language-inference",
"multitask",
"multi-task",
"pipeline",
"extreme-multi-task",
"extreme-mtl",
"tasksource",
"zero-shot",
"rlhf",
"zero-shot-classification",
"en",
"dataset:glue",
"dataset:nyu-mll/multi_nli",
"dataset:multi_nli",
"dataset:super_glue",
"dataset:anli",
"dataset:tasksource/babi_nli",
"dataset:sick",
"dataset:snli",
"dataset:scitail",
"dataset:OpenAssistant/oasst1",
"dataset:universal_dependencies",
"dataset:hans",
"dataset:qbao775/PARARULE-Plus",
"dataset:alisawuffles/WANLI",
"dataset:metaeval/recast",
"dataset:sileod/probability_words_nli",
"dataset:joey234/nan-nli",
"dataset:pietrolesci/nli_fever",
"dataset:pietrolesci/breaking_nli",
"dataset:pietrolesci/conj_nli",
"dataset:pietrolesci/fracas",
"dataset:pietrolesci/dialogue_nli",
"dataset:pietrolesci/mpe",
"dataset:pietrolesci/dnc",
"dataset:pietrolesci/gpt3_nli",
"dataset:pietrolesci/recast_white",
"dataset:pietrolesci/joci",
"dataset:martn-nguyen/contrast_nli",
"dataset:pietrolesci/robust_nli",
"dataset:pietrolesci/robust_nli_is_sd",
"dataset:pietrolesci/robust_nli_li_ts",
"dataset:pietrolesci/gen_debiased_nli",
"dataset:pietrolesci/add_one_rte",
"dataset:metaeval/imppres",
"dataset:pietrolesci/glue_diagnostics",
"dataset:hlgd",
"dataset:PolyAI/banking77",
"dataset:paws",
"dataset:quora",
"dataset:medical_questions_pairs",
"dataset:conll2003",
"dataset:nlpaueb/finer-139",
"dataset:Anthropic/hh-rlhf",
"dataset:Anthropic/model-written-evals",
"dataset:truthful_qa",
"dataset:nightingal3/fig-qa",
"dataset:tasksource/bigbench",
"dataset:blimp",
"dataset:cos_e",
"dataset:cosmos_qa",
"dataset:dream",
"dataset:openbookqa",
"dataset:qasc",
"dataset:quartz",
"dataset:quail",
"dataset:head_qa",
"dataset:sciq",
"dataset:social_i_qa",
"dataset:wiki_hop",
"dataset:wiqa",
"dataset:piqa",
"dataset:hellaswag",
"dataset:pkavumba/balanced-copa",
"dataset:12ml/e-CARE",
"dataset:art",
"dataset:tasksource/mmlu",
"dataset:winogrande",
"dataset:codah",
"dataset:ai2_arc",
"dataset:definite_pronoun_resolution",
"dataset:swag",
"dataset:math_qa",
"dataset:metaeval/utilitarianism",
"dataset:mteb/amazon_counterfactual",
"dataset:SetFit/insincere-questions",
"dataset:SetFit/toxic_conversations",
"dataset:turingbench/TuringBench",
"dataset:trec",
"dataset:tals/vitaminc",
"dataset:hope_edi",
"dataset:strombergnlp/rumoureval_2019",
"dataset:ethos",
"dataset:tweet_eval",
"dataset:discovery",
"dataset:pragmeval",
"dataset:silicone",
"dataset:lex_glue",
"dataset:papluca/language-identification",
"dataset:imdb",
"dataset:rotten_tomatoes",
"dataset:ag_news",
"dataset:yelp_review_full",
"dataset:financial_phrasebank",
"dataset:poem_sentiment",
"dataset:dbpedia_14",
"dataset:amazon_polarity",
"dataset:app_reviews",
"dataset:hate_speech18",
"dataset:sms_spam",
"dataset:humicroedit",
"dataset:snips_built_in_intents",
"dataset:banking77",
"dataset:hate_speech_offensive",
"dataset:yahoo_answers_topics",
"dataset:pacovaldez/stackoverflow-questions",
"dataset:zapsdcn/hyperpartisan_news",
"dataset:zapsdcn/sciie",
"dataset:zapsdcn/citation_intent",
"dataset:go_emotions",
"dataset:allenai/scicite",
"dataset:liar",
"dataset:relbert/lexical_relation_classification",
"dataset:metaeval/linguisticprobing",
"dataset:tasksource/crowdflower",
"dataset:metaeval/ethics",
"dataset:emo",
"dataset:google_wellformed_query",
"dataset:tweets_hate_speech_detection",
"dataset:has_part",
"dataset:wnut_17",
"dataset:ncbi_disease",
"dataset:acronym_identification",
"dataset:jnlpba",
"dataset:species_800",
"dataset:SpeedOfMagic/ontonotes_english",
"dataset:blog_authorship_corpus",
"dataset:launch/open_question_type",
"dataset:health_fact",
"dataset:commonsense_qa",
"dataset:mc_taco",
"dataset:ade_corpus_v2",
"dataset:prajjwal1/discosense",
"dataset:circa",
"dataset:PiC/phrase_similarity",
"dataset:copenlu/scientific-exaggeration-detection",
"dataset:quarel",
"dataset:mwong/fever-evidence-related",
"dataset:numer_sense",
"dataset:dynabench/dynasent",
"dataset:raquiba/Sarcasm_News_Headline",
"dataset:sem_eval_2010_task_8",
"dataset:demo-org/auditor_review",
"dataset:medmcqa",
"dataset:aqua_rat",
"dataset:RuyuanWan/Dynasent_Disagreement",
"dataset:RuyuanWan/Politeness_Disagreement",
"dataset:RuyuanWan/SBIC_Disagreement",
"dataset:RuyuanWan/SChem_Disagreement",
"dataset:RuyuanWan/Dilemmas_Disagreement",
"dataset:lucasmccabe/logiqa",
"dataset:wiki_qa",
"dataset:metaeval/cycic_classification",
"dataset:metaeval/cycic_multiplechoice",
"dataset:metaeval/sts-companion",
"dataset:metaeval/commonsense_qa_2.0",
"dataset:metaeval/lingnli",
"dataset:metaeval/monotonicity-entailment",
"dataset:metaeval/arct",
"dataset:metaeval/scinli",
"dataset:metaeval/naturallogic",
"dataset:onestop_qa",
"dataset:demelin/moral_stories",
"dataset:corypaik/prost",
"dataset:aps/dynahate",
"dataset:metaeval/syntactic-augmentation-nli",
"dataset:metaeval/autotnli",
"dataset:lasha-nlp/CONDAQA",
"dataset:openai/webgpt_comparisons",
"dataset:Dahoas/synthetic-instruct-gptj-pairwise",
"dataset:metaeval/scruples",
"dataset:metaeval/wouldyourather",
"dataset:sileod/attempto-nli",
"dataset:metaeval/defeasible-nli",
"dataset:metaeval/help-nli",
"dataset:metaeval/nli-veridicality-transitivity",
"dataset:metaeval/natural-language-satisfiability",
"dataset:metaeval/lonli",
"dataset:tasksource/dadc-limit-nli",
"dataset:ColumbiaNLP/FLUTE",
"dataset:metaeval/strategy-qa",
"dataset:openai/summarize_from_feedback",
"dataset:tasksource/folio",
"dataset:metaeval/tomi-nli",
"dataset:metaeval/avicenna",
"dataset:stanfordnlp/SHP",
"dataset:GBaker/MedQA-USMLE-4-options-hf",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:sileod/wikimedqa",
"dataset:declare-lab/cicero",
"dataset:amydeng2000/CREAK",
"dataset:metaeval/mutual",
"dataset:inverse-scaling/NeQA",
"dataset:inverse-scaling/quote-repetition",
"dataset:inverse-scaling/redefine-math",
"dataset:tasksource/puzzte",
"dataset:metaeval/implicatures",
"dataset:race",
"dataset:metaeval/spartqa-yn",
"dataset:metaeval/spartqa-mchoice",
"dataset:metaeval/temporal-nli",
"dataset:metaeval/ScienceQA_text_only",
"dataset:AndyChiang/cloth",
"dataset:metaeval/logiqa-2.0-nli",
"dataset:tasksource/oasst1_dense_flat",
"dataset:metaeval/boolq-natural-perturbations",
"dataset:metaeval/path-naturalness-prediction",
"dataset:riddle_sense",
"dataset:Jiangjie/ekar_english",
"dataset:metaeval/implicit-hate-stg1",
"dataset:metaeval/chaos-mnli-ambiguity",
"dataset:IlyaGusev/headline_cause",
"dataset:metaeval/race-c",
"dataset:metaeval/equate",
"dataset:metaeval/ambient",
"dataset:AndyChiang/dgen",
"dataset:metaeval/clcd-english",
"dataset:civil_comments",
"dataset:metaeval/acceptability-prediction",
"dataset:maximedb/twentyquestions",
"dataset:metaeval/counterfactually-augmented-snli",
"dataset:tasksource/I2D2",
"dataset:sileod/mindgames",
"dataset:metaeval/counterfactually-augmented-imdb",
"dataset:metaeval/cnli",
"dataset:metaeval/reclor",
"dataset:tasksource/oasst1_pairwise_rlhf_reward",
"dataset:tasksource/zero-shot-label-nli",
"dataset:webis/args_me",
"dataset:webis/Touche23-ValueEval",
"dataset:tasksource/starcon",
"dataset:tasksource/ruletaker",
"dataset:lighteval/lsat_qa",
"dataset:tasksource/ConTRoL-nli",
"dataset:tasksource/tracie",
"dataset:tasksource/sherliic",
"dataset:tasksource/sen-making",
"dataset:tasksource/winowhy",
"dataset:mediabiasgroup/mbib-base",
"dataset:tasksource/robustLR",
"dataset:CLUTRR/v1",
"dataset:tasksource/logical-fallacy",
"dataset:tasksource/parade",
"dataset:tasksource/cladder",
"dataset:tasksource/subjectivity",
"dataset:tasksource/MOH",
"dataset:tasksource/VUAC",
"dataset:tasksource/TroFi",
"dataset:sharc_modified",
"dataset:tasksource/conceptrules_v2",
"dataset:tasksource/disrpt",
"dataset:conll2000",
"dataset:DFKI-SLT/few-nerd",
"dataset:tasksource/com2sense",
"dataset:tasksource/scone",
"dataset:tasksource/winodict",
"dataset:tasksource/fool-me-twice",
"dataset:tasksource/monli",
"dataset:tasksource/corr2cause",
"dataset:tasksource/apt",
"dataset:zeroshot/twitter-financial-news-sentiment",
"dataset:tasksource/icl-symbol-tuning-instruct",
"dataset:tasksource/SpaceNLI",
"dataset:sihaochen/propsegment",
"dataset:HannahRoseKirk/HatemojiBuild",
"dataset:tasksource/regset",
"dataset:lmsys/chatbot_arena_conversations",
"dataset:tasksource/nlgraph",
"arxiv:2301.05948",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,673 | 1,723 | 15,941 | 123 | ---
datasets:
- glue
- nyu-mll/multi_nli
- multi_nli
- super_glue
- anli
- tasksource/babi_nli
- sick
- snli
- scitail
- OpenAssistant/oasst1
- universal_dependencies
- hans
- qbao775/PARARULE-Plus
- alisawuffles/WANLI
- metaeval/recast
- sileod/probability_words_nli
- joey234/nan-nli
- pietrolesci/nli_fever
- pietrolesci/breaking_nli
- pietrolesci/conj_nli
- pietrolesci/fracas
- pietrolesci/dialogue_nli
- pietrolesci/mpe
- pietrolesci/dnc
- pietrolesci/gpt3_nli
- pietrolesci/recast_white
- pietrolesci/joci
- martn-nguyen/contrast_nli
- pietrolesci/robust_nli
- pietrolesci/robust_nli_is_sd
- pietrolesci/robust_nli_li_ts
- pietrolesci/gen_debiased_nli
- pietrolesci/add_one_rte
- metaeval/imppres
- pietrolesci/glue_diagnostics
- hlgd
- PolyAI/banking77
- paws
- quora
- medical_questions_pairs
- conll2003
- nlpaueb/finer-139
- Anthropic/hh-rlhf
- Anthropic/model-written-evals
- truthful_qa
- nightingal3/fig-qa
- tasksource/bigbench
- blimp
- cos_e
- cosmos_qa
- dream
- openbookqa
- qasc
- quartz
- quail
- head_qa
- sciq
- social_i_qa
- wiki_hop
- wiqa
- piqa
- hellaswag
- pkavumba/balanced-copa
- 12ml/e-CARE
- art
- tasksource/mmlu
- winogrande
- codah
- ai2_arc
- definite_pronoun_resolution
- swag
- math_qa
- metaeval/utilitarianism
- mteb/amazon_counterfactual
- SetFit/insincere-questions
- SetFit/toxic_conversations
- turingbench/TuringBench
- trec
- tals/vitaminc
- hope_edi
- strombergnlp/rumoureval_2019
- ethos
- tweet_eval
- discovery
- pragmeval
- silicone
- lex_glue
- papluca/language-identification
- imdb
- rotten_tomatoes
- ag_news
- yelp_review_full
- financial_phrasebank
- poem_sentiment
- dbpedia_14
- amazon_polarity
- app_reviews
- hate_speech18
- sms_spam
- humicroedit
- snips_built_in_intents
- banking77
- hate_speech_offensive
- yahoo_answers_topics
- pacovaldez/stackoverflow-questions
- zapsdcn/hyperpartisan_news
- zapsdcn/sciie
- zapsdcn/citation_intent
- go_emotions
- allenai/scicite
- liar
- relbert/lexical_relation_classification
- metaeval/linguisticprobing
- tasksource/crowdflower
- metaeval/ethics
- emo
- google_wellformed_query
- tweets_hate_speech_detection
- has_part
- wnut_17
- ncbi_disease
- acronym_identification
- jnlpba
- species_800
- SpeedOfMagic/ontonotes_english
- blog_authorship_corpus
- launch/open_question_type
- health_fact
- commonsense_qa
- mc_taco
- ade_corpus_v2
- prajjwal1/discosense
- circa
- PiC/phrase_similarity
- copenlu/scientific-exaggeration-detection
- quarel
- mwong/fever-evidence-related
- numer_sense
- dynabench/dynasent
- raquiba/Sarcasm_News_Headline
- sem_eval_2010_task_8
- demo-org/auditor_review
- medmcqa
- aqua_rat
- RuyuanWan/Dynasent_Disagreement
- RuyuanWan/Politeness_Disagreement
- RuyuanWan/SBIC_Disagreement
- RuyuanWan/SChem_Disagreement
- RuyuanWan/Dilemmas_Disagreement
- lucasmccabe/logiqa
- wiki_qa
- metaeval/cycic_classification
- metaeval/cycic_multiplechoice
- metaeval/sts-companion
- metaeval/commonsense_qa_2.0
- metaeval/lingnli
- metaeval/monotonicity-entailment
- metaeval/arct
- metaeval/scinli
- metaeval/naturallogic
- onestop_qa
- demelin/moral_stories
- corypaik/prost
- aps/dynahate
- metaeval/syntactic-augmentation-nli
- metaeval/autotnli
- lasha-nlp/CONDAQA
- openai/webgpt_comparisons
- Dahoas/synthetic-instruct-gptj-pairwise
- metaeval/scruples
- metaeval/wouldyourather
- sileod/attempto-nli
- metaeval/defeasible-nli
- metaeval/help-nli
- metaeval/nli-veridicality-transitivity
- metaeval/natural-language-satisfiability
- metaeval/lonli
- tasksource/dadc-limit-nli
- ColumbiaNLP/FLUTE
- metaeval/strategy-qa
- openai/summarize_from_feedback
- tasksource/folio
- metaeval/tomi-nli
- metaeval/avicenna
- stanfordnlp/SHP
- GBaker/MedQA-USMLE-4-options-hf
- GBaker/MedQA-USMLE-4-options
- sileod/wikimedqa
- declare-lab/cicero
- amydeng2000/CREAK
- metaeval/mutual
- inverse-scaling/NeQA
- inverse-scaling/quote-repetition
- inverse-scaling/redefine-math
- tasksource/puzzte
- metaeval/implicatures
- race
- metaeval/spartqa-yn
- metaeval/spartqa-mchoice
- metaeval/temporal-nli
- metaeval/ScienceQA_text_only
- AndyChiang/cloth
- metaeval/logiqa-2.0-nli
- tasksource/oasst1_dense_flat
- metaeval/boolq-natural-perturbations
- metaeval/path-naturalness-prediction
- riddle_sense
- Jiangjie/ekar_english
- metaeval/implicit-hate-stg1
- metaeval/chaos-mnli-ambiguity
- IlyaGusev/headline_cause
- metaeval/race-c
- metaeval/equate
- metaeval/ambient
- AndyChiang/dgen
- metaeval/clcd-english
- civil_comments
- metaeval/acceptability-prediction
- maximedb/twentyquestions
- metaeval/counterfactually-augmented-snli
- tasksource/I2D2
- sileod/mindgames
- metaeval/counterfactually-augmented-imdb
- metaeval/cnli
- metaeval/reclor
- tasksource/oasst1_pairwise_rlhf_reward
- tasksource/zero-shot-label-nli
- webis/args_me
- webis/Touche23-ValueEval
- tasksource/starcon
- tasksource/ruletaker
- lighteval/lsat_qa
- tasksource/ConTRoL-nli
- tasksource/tracie
- tasksource/sherliic
- tasksource/sen-making
- tasksource/winowhy
- mediabiasgroup/mbib-base
- tasksource/robustLR
- CLUTRR/v1
- tasksource/logical-fallacy
- tasksource/parade
- tasksource/cladder
- tasksource/subjectivity
- tasksource/MOH
- tasksource/VUAC
- tasksource/TroFi
- sharc_modified
- tasksource/conceptrules_v2
- tasksource/disrpt
- conll2000
- DFKI-SLT/few-nerd
- tasksource/com2sense
- tasksource/scone
- tasksource/winodict
- tasksource/fool-me-twice
- tasksource/monli
- tasksource/corr2cause
- tasksource/apt
- zeroshot/twitter-financial-news-sentiment
- tasksource/icl-symbol-tuning-instruct
- tasksource/SpaceNLI
- sihaochen/propsegment
- HannahRoseKirk/HatemojiBuild
- tasksource/regset
- tasksource/babi_nli
- lmsys/chatbot_arena_conversations
- tasksource/nlgraph
language: en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- deberta-v3-base
- deberta-v3
- deberta
- text-classification
- nli
- natural-language-inference
- multitask
- multi-task
- pipeline
- extreme-multi-task
- extreme-mtl
- tasksource
- zero-shot
- rlhf
model-index:
- name: deberta-v3-base-tasksource-nli
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
config: rte
split: validation
metrics:
- type: accuracy
value: 0.89
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: anli-r3
type: anli
config: plain_text
split: validation
metrics:
- type: accuracy
value: 0.52
name: Accuracy
---
# Model Card for DeBERTa-v3-base-tasksource-nli
---
**NOTE**
Deprecated: use https://huggingface.co/tasksource/deberta-small-long-nli for longer context and better accuracy.
---
This is [DeBERTa-v3-base](https://hf.co/microsoft/deberta-v3-base) fine-tuned with multi-task learning on 600+ tasks of the [tasksource collection](https://github.com/sileod/tasksource/).
This checkpoint has strong zero-shot validation performance on many tasks (e.g. 70% on WNLI), and can be used for:
- Zero-shot entailment-based classification for arbitrary labels [ZS].
- Natural language inference [NLI]
- Hundreds of previous tasks with tasksource-adapters [TA].
- Further fine-tuning on a new task or tasksource task (classification, token classification or multiple-choice) [FT].
# [ZS] Zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",model="sileod/deberta-v3-base-tasksource-nli")
text = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(text, candidate_labels)
```
NLI training data of this model includes [label-nli](https://huggingface.co/datasets/tasksource/zero-shot-label-nli), an NLI dataset constructed specifically to improve this kind of zero-shot classification.
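For reference, the sketch below shows what entailment-based zero-shot classification does under the hood: each candidate label is rewritten as an NLI hypothesis and scored by its entailment probability. The hypothesis template here is illustrative, not the exact template used to build label-nli.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "sileod/deberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "one day I will see the world"
labels = ["travel", "cooking", "dancing"]

# Look up the index of the entailment class from the model config
entail_id = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]

scores = {}
for label in labels:
    hypothesis = f"This example is about {label}."
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(-1)[0]
    scores[label] = probs[entail_id].item()
print(max(scores, key=scores.get), scores)
```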
# [NLI] Natural language inference pipeline
```python
from transformers import pipeline
pipe = pipeline("text-classification",model="sileod/deberta-v3-base-tasksource-nli")
pipe([dict(text='there is a cat',
text_pair='there is a black cat')]) #list of (premise,hypothesis)
# [{'label': 'neutral', 'score': 0.9952911138534546}]
```
# [TA] Tasksource-adapters: 1 line access to hundreds of tasks
```python
# !pip install tasknet
import tasknet as tn
pipe = tn.load_pipeline('sileod/deberta-v3-base-tasksource-nli','glue/sst2') # works for 500+ tasksource tasks
pipe(['That movie was great !', 'Awful movie.'])
# [{'label': 'positive', 'score': 0.9956}, {'label': 'negative', 'score': 0.9967}]
```
The list of supported tasks is available in the model's `config.json`.
This approach is more efficient than zero-shot classification [ZS] since it requires only one forward pass per example, but it is less flexible.
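A quick way to check which tasks a checkpoint ships with is to read its `config.json` directly. The exact field that stores the task list is not documented here, so the sketch below just prints the task-related keys rather than assuming a field name.
```python
import json
from huggingface_hub import hf_hub_download

# Download the config file and list any entries that look task-related
path = hf_hub_download("sileod/deberta-v3-base-tasksource-nli", "config.json")
with open(path) as f:
    config = json.load(f)
print([key for key in config if "task" in key.lower()])
```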
# [FT] Tasknet: 3 lines fine-tuning
```python
# !pip install tasknet
import tasknet as tn
hparams=dict(model_name='sileod/deberta-v3-base-tasksource-nli', learning_rate=2e-5)
model, trainer = tn.Model_Trainer([tn.AutoTask("glue/rte")], hparams)
trainer.train()
```
## Evaluation
This model ranked 1st among all models with the microsoft/deberta-v3-base architecture according to the IBM model recycling evaluation.
https://ibm.github.io/model-recycling/
### Software and training details
The model was trained on 600 tasks for 200k steps with a batch size of 384 and a peak learning rate of 2e-5. Training took 15 days on an Nvidia A30 24GB GPU.
This is the shared model with the MNLI classifier on top. Each task had a specific CLS embedding, which is dropped 10% of the time to facilitate model use without it. All multiple-choice tasks used the same classification layers. For classification tasks, models shared weights if their labels matched.
https://github.com/sileod/tasksource/ \
https://github.com/sileod/tasknet/ \
Training code: https://colab.research.google.com/drive/1iB4Oxl9_B5W3ZDzXoWJN-olUbqLBxgQS?usp=sharing
# Citation
More details on this [article:](https://arxiv.org/abs/2301.05948)
```
@article{sileo2023tasksource,
title={tasksource: Structured Dataset Preprocessing Annotations for Frictionless Extreme Multi-Task Learning and Evaluation},
author={Sileo, Damien},
url= {https://arxiv.org/abs/2301.05948},
journal={arXiv preprint arXiv:2301.05948},
year={2023}
}
```
# Model Card Contact
[email protected]
</details> | [
"TEXT_CLASSIFICATION"
] | [
"HEAD-QA",
"JNLPBA",
"MEDQA",
"NCBI DISEASE",
"SCICITE",
"SCIQ",
"SCITAIL"
] | Non_BioNLP |
BatsResearch/Llama-3.1-8B-bonito-v1 | BatsResearch | text-generation | [
"safetensors",
"llama",
"task generation",
"synthetic datasets",
"text-generation",
"en",
"dataset:BatsResearch/ctga-v1",
"arxiv:2402.18334",
"license:llama3.1",
"region:us"
] | 1,723 | 1,723 | 267 | 5 | ---
datasets:
- BatsResearch/ctga-v1
language:
- en
license: llama3.1
pipeline_tag: text-generation
tags:
- task generation
- synthetic datasets
---
# Model Card for Llama-3.1-8B-bonito-v1
<!-- Provide a quick summary of what the model is/does. -->
Bonito is an open-source model for conditional task generation: the task of converting unannotated text into task-specific training datasets for instruction tuning.

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
Bonito can be used to create synthetic instruction tuning datasets to adapt large language models on users' specialized, private data.
In our [paper](https://arxiv.org/abs/2402.18334), we show that Bonito can be used to adapt both pretrained and instruction tuned models to tasks without any annotations.
- **Developed by:** Nihal V. Nayak, Yiyang Nan, Avi Trost, and Stephen H. Bach
- **Model type:** LlamaForCausalLM
- **Language(s) (NLP):** English
- **License:** [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
- **Finetuned from model:** `meta-llama/Meta-Llama-3.1-8B`
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/BatsResearch/bonito](https://github.com/BatsResearch/bonito)
- **Paper:** [Learning to Generate Instruction Tuning Datasets for
Zero-Shot Task Adaptation](https://arxiv.org/abs/2402.18334)
### Model Performance
Downstream performance of Mistral-7B-v0.1 after training with Llama-3.1-8B-bonito-v1 generated instructions.
| Model | PubMedQA | PrivacyQA | NYT | Amazon | Reddit | ContractNLI | Vitamin C | Average |
|------------------------------------------|----------|-----------|------|--------|--------|-------------|-----------|---------|
| Mistral-7B-v0.1 | 25.6 | 44.1 | 24.2 | 17.5 | 12.0 | 31.2 | 38.9 | 27.6 |
| Mistral-7B-v0.1 + Llama-3.1-8B-bonito-v1 | 44.5 | 53.7 | 80.7 | 72.9 | 70.1 | 69.7 | 73.3 | 66.4 |
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
To easily generate synthetic instruction tuning datasets, we recommend using the [bonito](https://github.com/BatsResearch/bonito) package built using the `transformers` and the `vllm` libraries.
```python
from bonito import Bonito
from vllm import SamplingParams
from datasets import load_dataset
# Initialize the Bonito model
bonito = Bonito("BatsResearch/Llama-3.1-8B-bonito-v1")
# Load a dataset with unannotated text
unannotated_text = load_dataset(
"BatsResearch/bonito-experiment",
"unannotated_contract_nli"
)["train"].select(range(10))
# Generate synthetic instruction tuning dataset
sampling_params = SamplingParams(max_tokens=256, top_p=0.95, temperature=0.5, n=1)
synthetic_dataset = bonito.generate_tasks(
unannotated_text,
context_col="input",
task_type="nli",
sampling_params=sampling_params
)
```
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Our model is trained to generate the following task types: summarization, sentiment analysis, multiple-choice question answering, extractive question answering, topic classification, natural language inference, question generation, text generation, question answering without choices, paraphrase identification, sentence completion, yes-no question answering, word sense disambiguation, paraphrase generation, textual entailment, and
coreference resolution.
The model might not produce accurate synthetic tasks beyond these task types.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
**Limitations**
Our work relies on the availability of large amounts of unannotated text.
If only a small quantity of unannotated text is present, the target language model, after adaptation, may experience a drop in performance.
While we demonstrate positive improvements on pretrained and instruction-tuned models, our observations are limited to the three task types (yes-no question answering, extractive question answering, and natural language inference) considered in our paper.
**Risks**
Bonito poses risks similar to those of any large language model.
For example, our model could be used to generate factually incorrect datasets in specialized domains.
Our model can exhibit the biases and stereotypes of the base model, Meta-Llama-3.1-8B, even after extensive supervised fine-tuning.
Finally, our model does not include safety training and can potentially generate harmful content.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
We recommend users thoroughly inspect the generated tasks and benchmark performance on critical datasets before deploying the models trained with the synthetic tasks into the real world.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
To train Bonito, we create a new dataset called conditional task generation with attributes by remixing existing instruction tuning datasets.
See [ctga-v1](https://huggingface.co/datasets/BatsResearch/ctga-v1) for more details.
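If you want to inspect this training mixture, it can be pulled with the standard `datasets` API. This is a minimal sketch; the `train` split and the printed fields are assumptions rather than guarantees made by the dataset card.

```python
from datasets import load_dataset

# Conditional task generation with attributes (ctga-v1)
ctga = load_dataset("BatsResearch/ctga-v1", split="train")

# Inspect the schema and one example; exact column names may differ
print(ctga.column_names)
print(ctga[0])
```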
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- **Training regime:** <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
We train the model using [Q-LoRA](https://github.com/artidoro/qlora) by optimizing the cross entropy loss over the output tokens.
The model is trained for 100,000 steps.
Training takes about one day to complete on eight A100 GPUs.
We use the following hyperparameters (a configuration sketch in code follows this list):
- Q-LoRA rank (r): 64
- Q-LoRA scaling factor (alpha): 4
- Q-LoRA dropout: 0
- Optimizer: Paged AdamW
- Learning rate scheduler: linear
- Max. learning rate: 1e-04
- Min. learning rate: 0
- Weight decay: 0
- Dropout: 0
- Max. gradient norm: 0.3
- Effective batch size: 16
- Max. input length: 2,048
- Max. output length: 2,048
- Num. steps: 100,000
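The sketch below expresses these settings with `peft`, `bitsandbytes`, and `transformers`. It is illustrative only: the 4-bit quantization type, compute dtype, and the per-device batch size / gradient accumulation split are assumptions not stated in this card.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization of the base model for Q-LoRA (NF4 + bfloat16 are assumptions)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings from the list above: r=64, alpha=4, dropout=0
lora_config = LoraConfig(
    r=64,
    lora_alpha=4,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)

# Optimization settings from the list above
training_args = TrainingArguments(
    output_dir="llama-3.1-8b-bonito-qlora",
    max_steps=100_000,
    learning_rate=1e-4,
    lr_scheduler_type="linear",
    weight_decay=0.0,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",
    per_device_train_batch_size=2,   # assumption
    gradient_accumulation_steps=8,   # 2 x 8 = 16 effective batch size (assumption)
)
```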
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@inproceedings{bonito:aclfindings24,
  title = {Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation},
  author = {Nayak, Nihal V. and Nan, Yiyang and Trost, Avi and Bach, Stephen H.},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2024},
  year = {2024}
}
``` | [
"COREFERENCE_RESOLUTION",
"QUESTION_ANSWERING",
"TEXTUAL_ENTAILMENT",
"SUMMARIZATION"
] | [
"PUBMEDQA"
] | Non_BioNLP |
RichardErkhov/bigscience_-_bloom-3b-4bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,714 | 1,714 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloom-3b - bnb 4bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscience/bloom-3b/
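A minimal sketch for loading these 4-bit weights, assuming the repository ships the bitsandbytes quantization settings in its config and that `transformers`, `accelerate`, and `bitsandbytes` are installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/bigscience_-_bloom-3b-4bits"

# The saved config carries the 4-bit bitsandbytes settings, so no extra
# quantization arguments should be needed here
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```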
Original model description:
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
model-index:
- name: bloom
results:
- task:
type: text-generation
name: text generation
dataset:
name: arc_challenge
type: arc_challenge
metrics:
- name: acc
type: acc
value: 0.27986348122866894
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: arc_easy
type: arc_easy
metrics:
- name: acc
type: acc
value: 0.5946969696969697
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: axb
type: axb
metrics:
- name: acc
type: acc
value: 0.4433876811594203
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: axg
type: axg
metrics:
- name: acc
type: acc
value: 0.5
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: boolq
type: boolq
metrics:
- name: acc
type: acc
value: 0.6165137614678899
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: cb
type: cb
metrics:
- name: acc
type: acc
value: 0.30357142857142855
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: cola
type: cola
metrics:
- name: acc
type: acc
value: 0.610738255033557
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: copa
type: copa
metrics:
- name: acc
type: acc
value: 0.63
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: crows_pairs_english
type: crows_pairs_english
metrics:
- name: acc
type: acc
value: 0.4973166368515206
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: crows_pairs_french
type: crows_pairs_french
metrics:
- name: acc
type: acc
value: 0.5032796660703638
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: diabla
type: diabla
metrics:
- name: acc
type: acc
value: 0.28888308977035493
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_afr
type: gsarti/flores_101_afr
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.500798737976343
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_amh
type: gsarti/flores_101_amh
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.9726863338897145
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ara
type: gsarti/flores_101_ara
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 1.8083841089875814
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_asm
type: gsarti/flores_101_asm
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.699102962086425
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ast
type: gsarti/flores_101_ast
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.9252047073429384
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_azj
type: gsarti/flores_101_azj
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.942805054270002
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_bel
type: gsarti/flores_101_bel
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.614136245847082
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ben
type: gsarti/flores_101_ben
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.121491534300969
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_bos
type: gsarti/flores_101_bos
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.653353469118798
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_bul
type: gsarti/flores_101_bul
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.7014693938055068
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_cat
type: gsarti/flores_101_cat
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.305190041967345
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ceb
type: gsarti/flores_101_ceb
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.291000321323428
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ces
type: gsarti/flores_101_ces
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.447322753586386
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ckb
type: gsarti/flores_101_ckb
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.7255124939234765
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_cym
type: gsarti/flores_101_cym
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 12.539424151448149
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_dan
type: gsarti/flores_101_dan
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.183309001005672
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_deu
type: gsarti/flores_101_deu
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.1180422286591347
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ell
type: gsarti/flores_101_ell
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.467943456164706
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_eng
type: gsarti/flores_101_eng
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.018740628193298
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_est
type: gsarti/flores_101_est
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 9.11654425176368
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_fas
type: gsarti/flores_101_fas
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.058009097116482
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_fin
type: gsarti/flores_101_fin
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.847047959628553
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_fra
type: gsarti/flores_101_fra
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 1.9975177011840075
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ful
type: gsarti/flores_101_ful
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 11.465912731488828
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_gle
type: gsarti/flores_101_gle
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.681491663539422
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_glg
type: gsarti/flores_101_glg
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.029991089015508
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_guj
type: gsarti/flores_101_guj
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.955224230286231
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hau
type: gsarti/flores_101_hau
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 10.758347356372159
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_heb
type: gsarti/flores_101_heb
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.6004478129801667
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hin
type: gsarti/flores_101_hin
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.712530650588064
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hrv
type: gsarti/flores_101_hrv
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.822418943372185
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hun
type: gsarti/flores_101_hun
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.440482646965992
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hye
type: gsarti/flores_101_hye
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.657718918347166
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ibo
type: gsarti/flores_101_ibo
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.564814003872672
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ind
type: gsarti/flores_101_ind
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.1597101468869373
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_isl
type: gsarti/flores_101_isl
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.082349269518136
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ita
type: gsarti/flores_101_ita
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.9687591414176207
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_jav
type: gsarti/flores_101_jav
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 7.0573805415708994
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_jpn
type: gsarti/flores_101_jpn
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.7758864197116933
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kam
type: gsarti/flores_101_kam
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 11.072949642861332
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kan
type: gsarti/flores_101_kan
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.551730651007082
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kat
type: gsarti/flores_101_kat
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.522630524283745
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kaz
type: gsarti/flores_101_kaz
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.3901748516975574
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kea
type: gsarti/flores_101_kea
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.918534182590863
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kir
type: gsarti/flores_101_kir
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.729278369847201
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kor
type: gsarti/flores_101_kor
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.932884847226212
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lao
type: gsarti/flores_101_lao
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.9077314760849924
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lav
type: gsarti/flores_101_lav
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 7.777221919194806
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lin
type: gsarti/flores_101_lin
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 7.524842908050988
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lit
type: gsarti/flores_101_lit
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 7.369179434621725
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ltz
type: gsarti/flores_101_ltz
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.801059747949214
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lug
type: gsarti/flores_101_lug
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.483203026364786
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_luo
type: gsarti/flores_101_luo
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 11.975963093623681
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mal
type: gsarti/flores_101_mal
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.615948455160037
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mar
type: gsarti/flores_101_mar
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.483253482821379
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mkd
type: gsarti/flores_101_mkd
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.9656732291754087
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mlt
type: gsarti/flores_101_mlt
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 15.004773437665275
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mon
type: gsarti/flores_101_mon
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.410598542315402
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mri
type: gsarti/flores_101_mri
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 7.474035895661322
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_msa
type: gsarti/flores_101_msa
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.5710001772665634
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mya
type: gsarti/flores_101_mya
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.413577969878331
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nld
type: gsarti/flores_101_nld
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.127831721885065
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nob
type: gsarti/flores_101_nob
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.402763169129877
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_npi
type: gsarti/flores_101_npi
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.199342701937889
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nso
type: gsarti/flores_101_nso
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.154626800955667
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nya
type: gsarti/flores_101_nya
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.179860208369393
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_oci
type: gsarti/flores_101_oci
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.8617357393685845
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_orm
type: gsarti/flores_101_orm
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 12.911595421079408
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ory
type: gsarti/flores_101_ory
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.189421861225964
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_pan
type: gsarti/flores_101_pan
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.698477289331806
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_pol
type: gsarti/flores_101_pol
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.625550458479643
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_por
type: gsarti/flores_101_por
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 1.9754515986213523
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_pus
type: gsarti/flores_101_pus
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.4963371422771585
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ron
type: gsarti/flores_101_ron
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.965456830031304
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_rus
type: gsarti/flores_101_rus
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.0498020542445303
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_slk
type: gsarti/flores_101_slk
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.450822127057479
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_slv
type: gsarti/flores_101_slv
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 6.620252120186232
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_sna
type: gsarti/flores_101_sna
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.462166771382726
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_snd
type: gsarti/flores_101_snd
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.466066951221973
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_som
type: gsarti/flores_101_som
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 11.95918054093392
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_spa
type: gsarti/flores_101_spa
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 1.8965140104323535
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_srp
type: gsarti/flores_101_srp
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.871214785885079
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_swe
type: gsarti/flores_101_swe
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.054972008155866
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_swh
type: gsarti/flores_101_swh
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.6973091886730676
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tam
type: gsarti/flores_101_tam
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.539493400469833
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tel
type: gsarti/flores_101_tel
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.807499987508966
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tgk
type: gsarti/flores_101_tgk
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 3.5994818827380426
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tgl
type: gsarti/flores_101_tgl
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.667053833119858
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tha
type: gsarti/flores_101_tha
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.365940201944242
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tur
type: gsarti/flores_101_tur
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 4.885014749844601
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ukr
type: gsarti/flores_101_ukr
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.7240934990288483
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_umb
type: gsarti/flores_101_umb
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 12.766915508610673
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_urd
type: gsarti/flores_101_urd
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 1.9797467071381232
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_uzb
type: gsarti/flores_101_uzb
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 12.002337637722146
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_vie
type: gsarti/flores_101_vie
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 1.76578415476397
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_wol
type: gsarti/flores_101_wol
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 9.144285650306488
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_xho
type: gsarti/flores_101_xho
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 7.403240538286952
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_yor
type: gsarti/flores_101_yor
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 5.91272037551173
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_zho_simpl
type: gsarti/flores_101_zho_simpl
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.2769070822768533
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_zho_trad
type: gsarti/flores_101_zho_trad
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 2.5180582198242383
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_zul
type: gsarti/flores_101_zul
metrics:
- name: byte_perplexity
type: byte_perplexity
value: 8.53353320693145
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: headqa
type: headqa
metrics:
- name: acc
type: acc
value: 0.26440554339897887
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: hellaswag
type: hellaswag
metrics:
- name: acc
type: acc
value: 0.41236805417247563
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: logiqa
type: logiqa
metrics:
- name: acc
type: acc
value: 0.2073732718894009
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mathqa
type: mathqa
metrics:
- name: acc
type: acc
value: 0.24958123953098826
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mc_taco
type: mc_taco
metrics:
- name: em
type: em
value: 0.11936936936936937
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mnli
type: mnli
metrics:
- name: acc
type: acc
value: 0.35496688741721855
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mnli_mismatched
type: mnli_mismatched
metrics:
- name: acc
type: acc
value: 0.35211554109031734
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mrpc
type: mrpc
metrics:
- name: acc
type: acc
value: 0.5857843137254902
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: multirc
type: multirc
metrics:
- name: acc
type: acc
value: 0.5375412541254125
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: openbookqa
type: openbookqa
metrics:
- name: acc
type: acc
value: 0.216
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: piqa
type: piqa
metrics:
- name: acc
type: acc
value: 0.7078346028291621
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: prost
type: prost
metrics:
- name: acc
type: acc
value: 0.22683603757472245
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: pubmedqa
type: pubmedqa
metrics:
- name: acc
type: acc
value: 0.616
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: qnli
type: qnli
metrics:
- name: acc
type: acc
value: 0.5072304594545122
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: qqp
type: qqp
metrics:
- name: acc
type: acc
value: 0.3842443729903537
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: race
type: race
metrics:
- name: acc
type: acc
value: 0.3521531100478469
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: rte
type: rte
metrics:
- name: acc
type: acc
value: 0.47653429602888087
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: sciq
type: sciq
metrics:
- name: acc
type: acc
value: 0.892
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: sst
type: sst
metrics:
- name: acc
type: acc
value: 0.5177752293577982
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: triviaqa
type: triviaqa
metrics:
- name: acc
type: acc
value: 0.041633518960487934
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: tydiqa_primary
type: tydiqa_primary
metrics:
- name: acc
type: acc
value: 0.3011337608795236
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: webqs
type: webqs
metrics:
- name: acc
type: acc
value: 0.01673228346456693
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: wic
type: wic
metrics:
- name: acc
type: acc
value: 0.5015673981191222
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: winogrande
type: winogrande
metrics:
- name: acc
type: acc
value: 0.5864246250986582
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: wnli
type: wnli
metrics:
- name: acc
type: acc
value: 0.471830985915493
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: wsc
type: wsc
metrics:
- name: acc
type: acc
value: 0.4423076923076923
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: humaneval
type: humaneval
metrics:
- name: pass@1
type: pass@1
value: 0.15524390243902436
verified: false
- name: pass@10
type: pass@10
value: 0.3220367632383857
verified: false
- name: pass@100
type: pass@100
value: 0.5545431515723145
verified: false
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 3,002,557,440 parameters:
* 642,252,800 embedding parameters
* 30 layers, 32 attention heads
* Hidden layers are 2560-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
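The architecture values above can be sanity-checked from the released configuration (a small sketch, assuming access to the `bigscience/bloom-3b` repository):

```python
from transformers import AutoConfig

# Inspect the BLOOM-3B architecture hyperparameters
cfg = AutoConfig.from_pretrained("bigscience/bloom-3b")
print(cfg.n_layer, cfg.n_head, cfg.hidden_size)  # expected: 30, 32, 2560
```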
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node, using NVLink 4 inter-GPU connects and 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11c-2B5-logs)
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
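A quick way to inspect the tokenizer (a sketch; it assumes the tokenizer is also distributed with the `bigscience/bloom-3b` checkpoint):

```python
from transformers import AutoTokenizer

# Load the BLOOM byte-level BPE tokenizer
tok = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

print(tok.vocab_size)                       # on the order of 250k entries
print(tok.tokenize("BigScience est née."))  # byte-level BPE pieces, no normalization
```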
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
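For reference, the two metrics above are directly related: perplexity is the exponential of the per-token cross entropy loss (in nats), which is consistent with the train-time numbers reported below (exp(2.2) ≈ 9.0).

```latex
% Perplexity from the per-token cross entropy loss (natural log base)
\mathrm{PPL} = \exp\left(\mathcal{L}_{\mathrm{CE}}\right)
```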
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Zero-shot evaluations:**
See this repository for JSON files: https://github.com/bigscience-workshop/evaluation-results
| Task | Language | Metric | BLOOM-2B5 |
|:----|:----|:----|:----:|
| arc_challenge | eng | acc ↑ | 0.28 |
| arc_easy | eng | acc ↑ | 0.595 |
| axb (Median of 10 prompts) | eng | acc ↑ | 0.443 |
| axg (Median of 10 prompts) | eng | acc ↑ | 0.5 |
| boolq (Median of 11 prompts) | eng | acc ↑ | 0.617 |
| cb (Median of 15 prompts) | eng | acc ↑ | 0.304 |
| cola (Median of 5 prompts) | eng | acc ↑ | 0.611 |
| copa (Median of 9 prompts) | eng | acc ↑ | 0.63 |
| crows_pairs_english (Median of 6 prompts) | eng | acc ↑ | 0.497 |
| crows_pairs_french (Median of 7 prompts) | fra | acc ↑ | 0.503 |
| diabla (Median of 2 prompts) | eng | acc ↑ | 0.289 |
| gsarti/flores_101_afr | afr | byte_perplexity ↓ | 6.501 |
| gsarti/flores_101_amh | amh | byte_perplexity ↓ | 3.973 |
| gsarti/flores_101_ara | ara | byte_perplexity ↓ | 1.808 |
| gsarti/flores_101_asm | asm | byte_perplexity ↓ | 5.699 |
| gsarti/flores_101_ast | ast | byte_perplexity ↓ | 3.925 |
| gsarti/flores_101_azj | azj | byte_perplexity ↓ | 6.943 |
| gsarti/flores_101_bel | bel | byte_perplexity ↓ | 3.614 |
| gsarti/flores_101_ben | ben | byte_perplexity ↓ | 5.121 |
| gsarti/flores_101_bos | bos | byte_perplexity ↓ | 5.653 |
| gsarti/flores_101_bul | bul | byte_perplexity ↓ | 2.701 |
| gsarti/flores_101_cat | cat | byte_perplexity ↓ | 2.305 |
| gsarti/flores_101_ceb | ceb | byte_perplexity ↓ | 6.291 |
| gsarti/flores_101_ces | ces | byte_perplexity ↓ | 5.447 |
| gsarti/flores_101_ckb | ckb | byte_perplexity ↓ | 3.726 |
| gsarti/flores_101_cym | cym | byte_perplexity ↓ | 12.539 |
| gsarti/flores_101_dan | dan | byte_perplexity ↓ | 5.183 |
| gsarti/flores_101_deu | deu | byte_perplexity ↓ | 3.118 |
| gsarti/flores_101_ell | ell | byte_perplexity ↓ | 2.468 |
| gsarti/flores_101_eng | eng | byte_perplexity ↓ | 2.019 |
| gsarti/flores_101_est | est | byte_perplexity ↓ | 9.117 |
| gsarti/flores_101_fas | fas | byte_perplexity ↓ | 3.058 |
| gsarti/flores_101_fin | fin | byte_perplexity ↓ | 6.847 |
| gsarti/flores_101_fra | fra | byte_perplexity ↓ | 1.998 |
| gsarti/flores_101_ful | ful | byte_perplexity ↓ | 11.466 |
| gsarti/flores_101_gle | gle | byte_perplexity ↓ | 8.681 |
| gsarti/flores_101_glg | glg | byte_perplexity ↓ | 3.03 |
| gsarti/flores_101_guj | guj | byte_perplexity ↓ | 4.955 |
| gsarti/flores_101_hau | hau | byte_perplexity ↓ | 10.758 |
| gsarti/flores_101_heb | heb | byte_perplexity ↓ | 3.6 |
| gsarti/flores_101_hin | hin | byte_perplexity ↓ | 4.713 |
| gsarti/flores_101_hrv | hrv | byte_perplexity ↓ | 5.822 |
| gsarti/flores_101_hun | hun | byte_perplexity ↓ | 6.44 |
| gsarti/flores_101_hye | hye | byte_perplexity ↓ | 3.658 |
| gsarti/flores_101_ibo | ibo | byte_perplexity ↓ | 5.565 |
| gsarti/flores_101_ind | ind | byte_perplexity ↓ | 2.16 |
| gsarti/flores_101_isl | isl | byte_perplexity ↓ | 8.082 |
| gsarti/flores_101_ita | ita | byte_perplexity ↓ | 2.969 |
| gsarti/flores_101_jav | jav | byte_perplexity ↓ | 7.057 |
| gsarti/flores_101_jpn | jpn | byte_perplexity ↓ | 2.776 |
| gsarti/flores_101_kam | kam | byte_perplexity ↓ | 11.073 |
| gsarti/flores_101_kan | kan | byte_perplexity ↓ | 5.552 |
| gsarti/flores_101_kat | kat | byte_perplexity ↓ | 2.523 |
| gsarti/flores_101_kaz | kaz | byte_perplexity ↓ | 3.39 |
| gsarti/flores_101_kea | kea | byte_perplexity ↓ | 8.919 |
| gsarti/flores_101_kir | kir | byte_perplexity ↓ | 3.729 |
| gsarti/flores_101_kor | kor | byte_perplexity ↓ | 3.933 |
| gsarti/flores_101_lao | lao | byte_perplexity ↓ | 2.908 |
| gsarti/flores_101_lav | lav | byte_perplexity ↓ | 7.777 |
| gsarti/flores_101_lin | lin | byte_perplexity ↓ | 7.525 |
| gsarti/flores_101_lit | lit | byte_perplexity ↓ | 7.369 |
| gsarti/flores_101_ltz | ltz | byte_perplexity ↓ | 8.801 |
| gsarti/flores_101_lug | lug | byte_perplexity ↓ | 8.483 |
| gsarti/flores_101_luo | luo | byte_perplexity ↓ | 11.976 |
| gsarti/flores_101_mal | mal | byte_perplexity ↓ | 4.616 |
| gsarti/flores_101_mar | mar | byte_perplexity ↓ | 5.483 |
| gsarti/flores_101_mkd | mkd | byte_perplexity ↓ | 2.966 |
| gsarti/flores_101_mlt | mlt | byte_perplexity ↓ | 15.005 |
| gsarti/flores_101_mon | mon | byte_perplexity ↓ | 3.411 |
| gsarti/flores_101_mri | mri | byte_perplexity ↓ | 7.474 |
| gsarti/flores_101_msa | msa | byte_perplexity ↓ | 2.571 |
| gsarti/flores_101_mya | mya | byte_perplexity ↓ | 2.414 |
| gsarti/flores_101_nld | nld | byte_perplexity ↓ | 4.128 |
| gsarti/flores_101_nob | nob | byte_perplexity ↓ | 5.403 |
| gsarti/flores_101_npi | npi | byte_perplexity ↓ | 5.199 |
| gsarti/flores_101_nso | nso | byte_perplexity ↓ | 8.155 |
| gsarti/flores_101_nya | nya | byte_perplexity ↓ | 8.18 |
| gsarti/flores_101_oci | oci | byte_perplexity ↓ | 4.862 |
| gsarti/flores_101_orm | orm | byte_perplexity ↓ | 12.912 |
| gsarti/flores_101_ory | ory | byte_perplexity ↓ | 5.189 |
| gsarti/flores_101_pan | pan | byte_perplexity ↓ | 4.698 |
| gsarti/flores_101_pol | pol | byte_perplexity ↓ | 4.626 |
| gsarti/flores_101_por | por | byte_perplexity ↓ | 1.975 |
| gsarti/flores_101_pus | pus | byte_perplexity ↓ | 4.496 |
| gsarti/flores_101_ron | ron | byte_perplexity ↓ | 4.965 |
| gsarti/flores_101_rus | rus | byte_perplexity ↓ | 2.05 |
| gsarti/flores_101_slk | slk | byte_perplexity ↓ | 6.451 |
| gsarti/flores_101_slv | slv | byte_perplexity ↓ | 6.62 |
| gsarti/flores_101_sna | sna | byte_perplexity ↓ | 8.462 |
| gsarti/flores_101_snd | snd | byte_perplexity ↓ | 5.466 |
| gsarti/flores_101_som | som | byte_perplexity ↓ | 11.959 |
| gsarti/flores_101_spa | spa | byte_perplexity ↓ | 1.897 |
| gsarti/flores_101_srp | srp | byte_perplexity ↓ | 2.871 |
| gsarti/flores_101_swe | swe | byte_perplexity ↓ | 5.055 |
| gsarti/flores_101_swh | swh | byte_perplexity ↓ | 3.697 |
| gsarti/flores_101_tam | tam | byte_perplexity ↓ | 4.539 |
| gsarti/flores_101_tel | tel | byte_perplexity ↓ | 5.807 |
| gsarti/flores_101_tgk | tgk | byte_perplexity ↓ | 3.599 |
| gsarti/flores_101_tgl | tgl | byte_perplexity ↓ | 5.667 |
| gsarti/flores_101_tha | tha | byte_perplexity ↓ | 2.366 |
| gsarti/flores_101_tur | tur | byte_perplexity ↓ | 4.885 |
| gsarti/flores_101_ukr | ukr | byte_perplexity ↓ | 2.724 |
| gsarti/flores_101_umb | umb | byte_perplexity ↓ | 12.767 |
| gsarti/flores_101_urd | urd | byte_perplexity ↓ | 1.98 |
| gsarti/flores_101_uzb | uzb | byte_perplexity ↓ | 12.002 |
| gsarti/flores_101_vie | vie | byte_perplexity ↓ | 1.766 |
| gsarti/flores_101_wol | wol | byte_perplexity ↓ | 9.144 |
| gsarti/flores_101_xho | xho | byte_perplexity ↓ | 7.403 |
| gsarti/flores_101_yor | yor | byte_perplexity ↓ | 5.913 |
| gsarti/flores_101_zho_simpl | zho_simpl | byte_perplexity ↓ | 2.277 |
| gsarti/flores_101_zho_trad | zho_trad | byte_perplexity ↓ | 2.518 |
| gsarti/flores_101_zul | zul | byte_perplexity ↓ | 8.534 |
| headqa | esp | acc ↑ | 0.264 |
| hellaswag | eng | acc ↑ | 0.412 |
| logiqa | eng | acc ↑ | 0.207 |
| mathqa | eng | acc ↑ | 0.25 |
| mc_taco | eng | em ↑ | 0.119 |
| mnli (Median of 15 prompts) | eng | acc ↑ | 0.355 |
| mnli_mismatched (Median of 15 prompts) | eng | acc ↑ | 0.352 |
| mrpc | eng | acc ↑ | 0.586 |
| multirc (Median of 11 prompts) | eng | acc ↑ | 0.538 |
| openbookqa | eng | acc ↑ | 0.216 |
| piqa | eng | acc ↑ | 0.708 |
| prost | eng | acc ↑ | 0.227 |
| pubmedqa | eng | acc ↑ | 0.616 |
| qnli | eng | acc ↑ | 0.507 |
| qqp (Median of 7 prompts) | eng | acc ↑ | 0.384 |
| race | eng | acc ↑ | 0.352 |
| rte (Median of 6 prompts) | eng | acc ↑ | 0.477 |
| sciq | eng | acc ↑ | 0.892 |
| sst (Median of 6 prompts) | eng | acc ↑ | 0.518 |
| triviaqa | eng | acc ↑ | 0.042 |
| tydiqa_primary (Median of 24 prompts) | eng | acc ↑ | 0.301 |
| webqs | eng | acc ↑ | 0.017 |
| wic (Median of 11 prompts) | eng | acc ↑ | 0.502 |
| winogrande | eng | acc ↑ | 0.586 |
| wnli (Median of 6 prompts) | eng | acc ↑ | 0.472 |
| wsc (Median of 11 prompts) | eng | acc ↑ | 0.442 |
| humaneval | python | pass@1 ↑ | 0.155 |
| humaneval | python | pass@10 ↑ | 0.322 |
| humaneval | python | pass@100 ↑ | 0.555 |
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, and the many technical tricks and questions encountered): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"PUBMEDQA",
"SCIQ"
] | Non_BioNLP |
RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2309.09530",
"arxiv:2406.14491",
"endpoints_compatible",
"region:us"
] | 1,731 | 1,731 | 234 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
law-LLM-13B - GGUF
- Model creator: https://huggingface.co/AdaptLLM/
- Original model: https://huggingface.co/AdaptLLM/law-LLM-13B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [law-LLM-13B.Q2_K.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q2_K.gguf) | Q2_K | 4.52GB |
| [law-LLM-13B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [law-LLM-13B.Q3_K.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q3_K.gguf) | Q3_K | 5.9GB |
| [law-LLM-13B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [law-LLM-13B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [law-LLM-13B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [law-LLM-13B.Q4_0.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q4_0.gguf) | Q4_0 | 6.86GB |
| [law-LLM-13B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [law-LLM-13B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [law-LLM-13B.Q4_K.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q4_K.gguf) | Q4_K | 7.33GB |
| [law-LLM-13B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [law-LLM-13B.Q4_1.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q4_1.gguf) | Q4_1 | 7.61GB |
| [law-LLM-13B.Q5_0.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q5_0.gguf) | Q5_0 | 8.36GB |
| [law-LLM-13B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [law-LLM-13B.Q5_K.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q5_K.gguf) | Q5_K | 8.6GB |
| [law-LLM-13B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [law-LLM-13B.Q5_1.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q5_1.gguf) | Q5_1 | 9.1GB |
| [law-LLM-13B.Q6_K.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q6_K.gguf) | Q6_K | 9.95GB |
| [law-LLM-13B.Q8_0.gguf](https://huggingface.co/RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf/blob/main/law-LLM-13B.Q8_0.gguf) | Q8_0 | 12.88GB |
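These GGUF files are meant for llama.cpp-compatible runtimes. A minimal usage sketch is shown below; it assumes the `llama-cpp-python` bindings and `huggingface_hub`, which this card does not prescribe, and picks the Q4_K_M file from the table above as a common speed/quality trade-off:

```python
# Assumption: pip install llama-cpp-python huggingface_hub (not specified by this card)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantization from this repo (filename taken from the table above).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/AdaptLLM_-_law-LLM-13B-gguf",
    filename="law-LLM-13B.Q4_K_M.gguf",
)

# Load the model; n_ctx is the context window to allocate.
llm = Llama(model_path=gguf_path, n_ctx=2048)

# Plain-text prompting, since this is a base (non-chat) model.
out = llm("Question: What is an ex post facto law?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```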
Original model description:
---
language:
- en
datasets:
- Open-Orca/OpenOrca
- GAIR/lima
- WizardLM/WizardLM_evol_instruct_V2_196k
- EleutherAI/pile
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- legal
---
# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
This repo contains the domain-specific base model developed from **LLaMA-1-13B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
### [2024/6/21] 🤗 We release the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain), effective for both pre-training from scratch and continual pre-training 🤗
**************************** **Updates** ****************************
* 2024/8/29: Updated [guidelines](https://huggingface.co/datasets/AdaptLLM/finance-tasks) on evaluating any 🤗Huggingface models on the domain-specific tasks
* 2024/6/22: Released the [benchmarking code](https://github.com/microsoft/LMOps/tree/main/adaptllm)
* 2024/6/21: Released the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain)
* 2024/4/2: Released the [raw data splits (train and test)](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) of all the evaluation datasets
* 2024/1/16: Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024
* 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B
* 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B
* 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B
## 1. Domain-Specific Models
### LLaMA-1-7B
In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available on Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM). The performance of AdaptLLM compared to other domain-specific LLMs is shown below:
<p align='center'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700">
</p>
### LLaMA-1-13B
Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B).
### LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension data can perfectly fit this format** by transforming each reading-comprehension example into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
For example, to prompt the law base model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-LLM-13B")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-LLM-13B", use_fast=False)
# Put your input here:
user_input = '''Question: Which of the following is false about ex post facto laws?
Options:
- They make criminal an act that was innocent when committed.
- They prescribe greater punishment for an act than was prescribed when it was done.
- They increase the evidence required to convict a person than when the act was done.
- They alter criminal offenses or punishment in a substantially prejudicial manner for the purpose of punishing a person for some past activity.
Please provide your choice first and then provide explanations if possible.'''
# Simply use your input as the prompt for base models
prompt = user_input
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(model.device)
outputs = model.generate(input_ids=inputs, max_length=2048)[0]
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(pred)
```
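For the chat variants linked above (e.g. [Law-Chat](https://huggingface.co/AdaptLLM/law-chat)), the prompt must follow the Llama-2-Chat format referenced earlier. A hedged sketch, mirroring the base-model snippet (the bare `[INST] ... [/INST]` wrapper below is illustrative; see the linked prompt-format guide for the full template, including the optional system block):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

chat_model = AutoModelForCausalLM.from_pretrained("AdaptLLM/law-chat")
chat_tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/law-chat", use_fast=False)

user_input = "What does 'ex post facto law' mean?"

# Llama-2-Chat style wrapper (illustrative; system prompt omitted here).
prompt = f"<s>[INST] {user_input} [/INST]"

inputs = chat_tokenizer(prompt, return_tensors="pt", add_special_tokens=False).input_ids.to(chat_model.device)
outputs = chat_model.generate(input_ids=inputs, max_length=2048)[0]
answer_start = int(inputs.shape[-1])
print(chat_tokenizer.decode(outputs[answer_start:], skip_special_tokens=True))
```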
### LLaMA-3-8B (💡New!)
In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B).
## 2. Domain-Specific Tasks
### Pre-templatized Testing Splits
To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions for the test split of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
Note: those filled-in instructions are specifically tailored for models before alignment and do NOT fit the specific data format required for chat models.
### Evaluating Any Huggingface LMs on Domain-Specific Tasks (💡New!)
You can use the following script to reproduce our results and evaluate any other Huggingface models on domain-specific tasks. Note that the script is NOT applicable to models that require specific prompt templates (e.g., Llama2-chat, Llama3-Instruct).
1). **Set Up Dependencies**
```bash
git clone https://github.com/microsoft/LMOps
cd LMOps/adaptllm
pip install -r requirements.txt
```
2). **Evaluate the Model**
```bash
# Select the domain from ['biomedicine', 'finance', 'law']
DOMAIN='law'
# Specify any Huggingface model name (Not applicable to chat models)
MODEL='AdaptLLM/law-LLM-13B'
# Model parallelization:
# - Set MODEL_PARALLEL=False if the model fits on a single GPU.
# We observe that LMs smaller than 10B always meet this requirement.
# - Set MODEL_PARALLEL=True if the model is too large and encounters OOM on a single GPU.
MODEL_PARALLEL=True
# Choose the number of GPUs from [1, 2, 4, 8]
N_GPU=2
# Whether to add a BOS token at the beginning of the prompt input:
# - Set to False for AdaptLLM.
# - Set to True for instruction-pretrain models.
# If unsure, we recommend setting it to False, as this is suitable for most LMs.
add_bos_token=False
# Run the evaluation script
bash scripts/inference.sh ${DOMAIN} ${MODEL} ${add_bos_token} ${MODEL_PARALLEL} ${N_GPU}
```
### Raw Datasets
We have also uploaded the raw training and testing splits, for facilitating fine-tuning or other usages: [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt), [RCT](https://huggingface.co/datasets/AdaptLLM/RCT), [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA), [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA), [Headline](https://huggingface.co/datasets/AdaptLLM/Headline), [NER](https://huggingface.co/datasets/AdaptLLM/NER), [FPB](https://huggingface.co/datasets/AdaptLLM/FPB)
### Domain Knowledge Probing
Our pre-processed knowledge probing datasets are available at: [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob) and [law_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/law_knowledge_prob)
## Citation
If you find our work helpful, please cite us:
```bibtex
@inproceedings{
cheng2024adapting,
title={Adapting Large Language Models via Reading Comprehension},
author={Daixuan Cheng and Shaohan Huang and Furu Wei},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=y886UXPEZ0}
}
```
| [
"QUESTION_ANSWERING"
] | [
"CHEMPROT"
] | Non_BioNLP |
gmonsoon/llama3-8b-cpt-sahabatai-v1-instruct-GGUF | gmonsoon | null | [
"gguf",
"en",
"id",
"jv",
"su",
"arxiv:2309.06085",
"arxiv:2310.04928",
"arxiv:2311.07911",
"base_model:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"base_model:quantized:GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,731 | 1,731 | 2,445 | 1 | ---
base_model:
- GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
language:
- en
- id
- jv
- su
license: llama3
---
# Llama3 8B CPT Sahabat-AI v1 Instruct
**Sahabat-AI** (Indonesian language for “close friends”) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for Indonesian language and its various dialects. The Sahabat-AI ecosystem is co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
Llama3 8B CPT Sahabat-AI v1 Instruct is an Indonesian-focused model which has been fine-tuned with around **448,000 Indonesian instruction-completion pairs** alongside an Indonesian-dialect pool consisting of **96,000 instruction-completion pairs in Javanese** and **98,000 instruction-completion pairs in Sundanese**. Additionally, we added a pool of **129,000 instruction-completion pairs in English**.
- **Co-initiated by:** PT GoTo Gojek Tokopedia Tbk, Indosat Ooredoo Hutchison
- **Developed by:** PT GoTo Gojek Tokopedia Tbk, AI Singapore
- **Model type:** Decoder
- **Languages:** English, Indonesian, Javanese, Sundanese
- **License:** [Llama3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Model Details
### Model Description
We performed instruction tuning in Indonesian, Javanese, Sundanese as well as English on our [continued pre-trained Llama3 8B CPT Sahabat-AI v1 base](https://huggingface.co/GoToCompany/llama3-8b-cpt-sahabatai-v1-base), a decoder model using the Llama3 architecture, to create Llama3 8B CPT Sahabat-AI v1 Instruct.
For tokenisation, the model employs the default tokenizer used in Llama-3-8B. The model has a context length of 8192.
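As a quick sanity check of these two facts, one could inspect the model's config and tokenizer with standard 🤗 Transformers calls (a sketch; the attribute names are generic Transformers fields, not Sahabat-AI-specific):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.max_position_embeddings)  # context length; expected to be 8192 per this card
print(len(tokenizer))                  # vocabulary size of the default Llama-3 tokenizer
```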
### Benchmark Performance
We evaluated Llama3 8B CPT Sahabat-AI V1 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the
- [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
- These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
- We also added support for Javanese and Sundanese for the BHASA tasks whenever applicable
- [IndoMMLU](https://arxiv.org/pdf/2310.04928)
- These tasks include examination questions on Humanities, Indonesian language, Local languages and cultures, Social science and STEM across primary, middle, and high school levels.
- and the common English tasks from the [HuggingFace LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
- These tasks consist of [IFEval, BBH, Math Lvl 5, GPQA, MuSR, and MMLU-PRO.](https://huggingface.co/docs/leaderboards/open_llm_leaderboard/about)
- **Caveat**: Our results differ from the HuggingFace LLM Leaderboard because we have used [VLLM](https://docs.vllm.ai/en/latest/) as our inference platform. VLLM caps the context size at **4096 tokens**, while the HuggingFace leaderboard uses **8192 tokens**.
Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done **zero-shot** with native prompts on a sample of 100-1000 instances for each dataset.
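A rough sketch of the two mechanics described above is given here; the answer-tag format and the normalisation convention are illustrative assumptions, not SEA HELM's exact implementation:

```python
import re

def extract_answer(response: str) -> str | None:
    # Hypothetical tag format; SEA HELM's prompts define the actual expected format.
    match = re.search(r"<answer>(.*?)</answer>", response, flags=re.DOTALL)
    return match.group(1).strip() if match else None

def chance_normalised(raw_accuracy: float, n_options: int) -> float:
    # One common convention: rescale so random guessing maps to 0 and a perfect score to 1.
    baseline = 1.0 / n_options
    return (raw_accuracy - baseline) / (1.0 - baseline)

print(chance_normalised(0.625, n_options=4))  # 0.625 raw accuracy on a 4-way task -> 0.5
```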
#### Instruction-following Capabilities
Since Llama3 8B CPT Sahabat-AI v1 Instruct is an instruction-following model, we also evaluated it on instruction-following capabilities with the [IFEval](https://arxiv.org/abs/2311.07911) dataset.
As this dataset was in English, the linguists and native speakers in the team worked together to filter, localize and translate the dataset into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalized by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
*Note*: IFEval was only used on Bahasa Indonesia. We are currently working on adding it for Javanese and Sundanese for our upcoming releases.
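A minimal sketch of that scoring rule (illustrative only; the actual IFEval harness covers many constraint types):

```python
def ifeval_accuracy(followed_constraint: list[bool], in_target_language: list[bool]) -> float:
    # A response only counts as correct if the constraint is satisfied
    # AND the response is in the target language.
    assert len(followed_constraint) == len(in_target_language)
    passed = sum(f and l for f, l in zip(followed_constraint, in_target_language))
    return passed / len(followed_constraint)

# Example: 8 of 10 responses satisfy the constraint, but one of those is in the wrong language.
print(ifeval_accuracy([True] * 8 + [False] * 2, [True] * 7 + [False] + [True] * 2))  # 0.7
```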
#### Results
#### Indonesian Results
#### SEA HELM (also known as BHASA)
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Language / Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall (Bahasa Indonesia + Javanese + Sundanese)</td>
<td style="border: 1px solid gray; padding: 8px;">36.963</td>
<td style="border: 1px solid gray; padding: 8px;">42.988</td>
<td style="border: 1px solid gray; padding: 8px;">37.805</td>
<td style="border: 1px solid gray; padding: 8px;">45.866</td>
<td style="border: 1px solid gray; padding: 8px;">46.880</td>
<td style="border: 1px solid gray; padding: 8px;">56.359</td>
<td style="border: 2px solid black; padding: 8px;">53.725</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">61.169</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Bahasa Indonesia</td>
<td style="border: 1px solid gray; padding: 8px;">46.760</td>
<td style="border: 1px solid gray; padding: 8px;">60.372</td>
<td style="border: 1px solid gray; padding: 8px;">42.022</td>
<td style="border: 1px solid gray; padding: 8px;">51.944</td>
<td style="border: 1px solid gray; padding: 8px;">54.579</td>
<td style="border: 1px solid gray; padding: 8px;">63.394</td>
<td style="border: 2px solid black; padding: 8px;">57.221</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">64.154</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Javanese</td>
<td style="border: 1px solid gray; padding: 8px;">33.956</td>
<td style="border: 1px solid gray; padding: 8px;">40.625</td>
<td style="border: 1px solid gray; padding: 8px;">41.739</td>
<td style="border: 1px solid gray; padding: 8px;">47.587</td>
<td style="border: 1px solid gray; padding: 8px;">48.012</td>
<td style="border: 1px solid gray; padding: 8px;">56.468</td>
<td style="border: 2px solid black; padding: 8px;">56.460</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">64.439</td>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Sundanese</td>
<td style="border: 1px solid gray; padding: 8px;">30.173</td>
<td style="border: 1px solid gray; padding: 8px;">27.969</td>
<td style="border: 1px solid gray; padding: 8px;">29.654</td>
<td style="border: 1px solid gray; padding: 8px;">38.068</td>
<td style="border: 1px solid gray; padding: 8px;">38.050</td>
<td style="border: 1px solid gray; padding: 8px;">49.216</td>
<td style="border: 2px solid black; padding: 8px;">47.495</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">54.913</td>
</tr>
</table>
#### IndoMMLU
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px; font-weight: bold;">Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Meta-Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Overall Results</td>
<td style="border: 1px solid gray; padding: 8px;">53.0%</td>
<td style="border: 1px solid gray; padding: 8px;">56.0%</td>
<td style="border: 1px solid gray; padding: 8px;">51.9%</td>
<td style="border: 1px solid gray; padding: 8px;">53.8%</td>
<td style="border: 1px solid gray; padding: 8px;">54.4%</td>
<td style="border: 1px solid gray; padding: 8px;">61.4%</td>
<td style="border: 2px solid black; padding: 8px;">55.6%</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">62.6%</td>
</tr>
</table>
#### English Results
<table style="border-collapse: collapse; width: 100%; font-size: 10px">
<tr>
<th style="border: 2px solid black; padding: 8px;">Model Name [Instruct]</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Qwen2.5-7B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3-8B</th>
<th style="border: 1px solid gray; padding: 8px;">Llama-3.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sea-lionv2.1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">gemma-2-9B</th>
<th style="border: 2px solid black; padding: 8px;">sahabatai-v1-8B</th>
<th style="border: 1px solid gray; padding: 8px;">sahabatai-v1-9B</th>
</tr>
<tr>
<td style="border: 2px solid black; padding: 8px; font-weight: bold;">Average</td>
<td style="border: 1px solid gray; padding: 8px;">24.48</td>
<td style="border: 1px solid gray; padding: 8px;">27.75</td>
<td style="border: 1px solid gray; padding: 8px;">23.91</td>
<td style="border: 1px solid gray; padding: 8px;">27.98</td>
<td style="border: 1px solid gray; padding: 8px;">24.52</td>
<td style="border: 1px solid gray; padding: 8px;">26.44</td>
<td style="border: 2px solid black; padding: 8px;">24.43</td>
<td style="border: 1px solid gray; padding: 8px; background-color: lightgreen;">33.67</td>
</tr>
</table>
Llama3 8B CPT Sahabat-AI v1 Instruct can be run using the 🤗 Transformers library
```python
# Please use transformers==4.45.0
import torch
import transformers
model_id = "GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
# Javanese
messages = [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Sopo wae sing ana ing Punakawan?"}
]
outputs = pipeline(
messages,
max_new_tokens=256,
eos_token_id=terminators,
)
print(outputs[0]["generated_text"][-1])
# Sundanese
messages = [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Kumaha caritana si Kabayan?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
eos_token_id=terminators,
)
print(outputs[0]["generated_text"][-1])
```
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current Sahabat-AI models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama3 8B CPT Sahabat-AI v1 Instruct was built using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 4 hours, with alignment taking 2 hours, both on 8x H100-80GB GPUs.
## Data
Llama3 8B CPT Sahabat-AI v1 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
## Call for Collaboration
Sahabat-AI (Indonesian language for “close friends”) is a **local open source Large Language Model (LLM) ecosystem in Indonesian language**, co-initiated by Indonesian tech and telecommunication companies: GoTo Group and Indosat Ooredoo Hutchison.
The Sahabat-AI ecosystem aims to empower Indonesians who want to develop AI-based services and applications using Bahasa Indonesia and its various local dialects.
We are supported by research centers and global tech experts such as AI Singapore and Tech Mahindra to train the model to gain general language understanding.
We also collaborate with top Indonesian universities such as the University of Indonesia, Gadjah Mada University, Bogor Institute of Agriculture and Bandung Institute of Technology, as well as leading Indonesian media groups such as Kompas Gramedia Group and Republika, to train and enrich the model in Bahasa Indonesia, ensuring optimum provision of local context and cultural relevance.
We would like to invite **researchers, developers, and language enthusiasts** to actively contribute to the enhancement and expansion of Sahabat-AI.
Your collaborations can involve:
- Identifying and reporting technical issues
- Sharing pre-training, instruction, and preference data
- Improving documentation usability
- Proposing and implementing new model evaluation tasks and metrics
Join us in shaping the future of Sahabat-AI by sharing your expertise and insights to make these models more accessible, accurate, and versatile.
You can contribute your ideas through [this form.](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit)
## The Development Team (in ascending alphabetical order)
### AI Singapore
Chan Adwin<br>
Cheng Nicholas<br>
Choa Esther<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Limkonchotiwat Peerat<br>
Liu Bing Jie Darius<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teng Walter<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
### PT GoTo Gojek Tokopedia Tbk
Anissa Dininta<br>
Chau Shiau Ching<br>
Choiri Hendra Hadhil<br>
Goel Priyank<br>
Saini Ajay Kumar<br>
Shalev Ofir<br>
Tan Daryl<br>
Tep Kilian Rithi<br>
Tiwari Anupam<br>
Widjojo Daniel<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [Sahabat-AI Inquiry Form.](https://docs.google.com/forms/d/1_us969eQtEooYOn4XkvGkdP5VHOyCbO6L_sd9kTMnaA/edit)
## Disclaimer
This is the repository for the Instruct model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## References
### IndoMMLU Reference
```bibtex
@inproceedings{koto-etal-2023-indommlu,
title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
}
``` | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] | Non_BioNLP |
Alibaba-NLP/gte-Qwen2-7B-instruct | Alibaba-NLP | sentence-similarity | [
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"custom_code",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,718 | 1,736 | 110,385 | 348 | ---
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
## gte-Qwen2-7B-instruct
**gte-Qwen2-7B-instruct** is the latest model in the gte (General Text Embedding) model family, ranking **No.1** in both the English and Chinese evaluations of the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard) (as of June 16, 2024).
Recently, the [**Qwen team**](https://huggingface.co/Qwen) released the Qwen2 series models, and we have trained **gte-Qwen2-7B-instruct** on top of the [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) LLM. Compared to [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct), **gte-Qwen2-7B-instruct** uses the same training data and training strategies during the finetuning stage; the only difference is the base model, upgraded to Qwen2-7B. Given the improvements of the Qwen2 series over the Qwen1.5 series, consistent performance gains can also be expected in the embedding models.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 7B
- Embedding Dimension: 3584
- Max Input Tokens: 32k
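As a quick sanity check, the embedding dimension can be read from the model configuration. This is a minimal sketch, assuming the Qwen2-based config exposes `hidden_size` as the output embedding dimension:
```python
from transformers import AutoConfig

# Sanity check: hidden_size should match the embedding dimension listed above.
config = AutoConfig.from_pretrained("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
print(config.hidden_size)  # expected: 3584
```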
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to pass a custom prompt of your choice.
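For instance, a minimal sketch of encoding with a custom instruction prompt (the task description and query below are illustrative) could look like this:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)

task = "Given a web search query, retrieve relevant passages that answer the query"
queries = ["how much protein should a female eat"]

# The instruction is prepended to queries only; documents are encoded without a prompt.
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
```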
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Infinity_emb
Usage via [infinity](https://github.com/michaelfeil/infinity), an MIT-licensed inference server.
```
# requires ~16-32GB VRAM and an NVIDIA GPU with Compute Capability >= 8.0
docker run \
-v $PWD/data:/app/.cache --gpus "0" -p "7997":"7997" \
michaelf34/infinity:0.0.68-trt-onnx \
v2 --model-id Alibaba-NLP/gte-Qwen2-7B-instruct --revision "refs/pr/38" --dtype bfloat16 --batch-size 8 --device cuda --engine torch --port 7997 --no-bettertransformer
```
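Once the container is running, the server can be queried over HTTP. The sketch below assumes infinity's OpenAI-compatible `/embeddings` route on the port mapped above; adjust the route and payload to your infinity version:
```python
import requests

# Assumption: infinity exposes an OpenAI-compatible /embeddings endpoint on port 7997.
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={"model": "Alibaba-NLP/gte-Qwen2-7B-instruct", "input": ["summit define"]},
    timeout=60,
)
print(len(resp.json()["data"][0]["embedding"]))  # expected: 3584
```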
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-7B-instruct** on MTEB (English) / C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | 67.16 | 67.65 | 66.60 | 64.04 |
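If you prefer the `mteb` library over the bundled script, a minimal sketch could look like the following (the single task shown is illustrative; reproducing the table above requires the full MTEB/C-MTEB task lists):
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)

# Evaluate on one illustrative task; pass the full task list to reproduce MTEB(56).
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(model, output_folder="results/gte-Qwen2-7B-instruct")
```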
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, the GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Community support
### Fine-tuning
GTE models can be fine-tuned with SWIFT, a third-party framework.
```shell
pip install ms-swift -U
```
```shell
# check: https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
USE_HF=1 \
swift sft \
--model Alibaba-NLP/gte-Qwen2-7B-instruct \
--train_type lora \
--dataset 'sentence-transformers/stsb' \
--torch_dtype bfloat16 \
--num_train_epochs 10 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps $(expr 64 / $nproc_per_node) \
--eval_steps 100 \
--save_steps 100 \
--eval_strategy steps \
--use_chat_template false \
--save_total_limit 5 \
--logging_steps 5 \
--output_dir output \
--warmup_ratio 0.05 \
--learning_rate 5e-6 \
--deepspeed zero3 \
--dataloader_num_workers 4 \
--task_type embedding \
--loss_type cosine_similarity \
--dataloader_drop_last true
```
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
RichardErkhov/croissantllm_-_CroissantLLMChat-v0.1-awq | RichardErkhov | null | [
"safetensors",
"llama",
"arxiv:2402.00786",
"4-bit",
"awq",
"region:us"
] | 1,736 | 1,736 | 9 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CroissantLLMChat-v0.1 - AWQ
- Model creator: https://huggingface.co/croissantllm/
- Original model: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1/
Original model description:
---
license: mit
datasets:
- croissantllm/croissant_dataset
- croissantllm/CroissantLLM-2201-sft
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
pipeline_tag: text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLMChat (190k steps + Chat)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 190k steps (2.99T tokens) of pretraining and a final Chat finetuning phase.
https://arxiv.org/abs/2402.00786
For best performance, it should be used with a temperature of 0.3 or more, and with the exact template described below:
```python
chat = [
{"role": "user", "content": "Que puis-je faire à Marseille en hiver?"},
]
chat_input = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
corresponding to:
```python
chat_input = """<|im_start|>user
{USER QUERY}<|im_end|>
<|im_start|>assistant\n"""
```
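For the example query above, the rendered input therefore becomes:
```python
chat_input = """<|im_start|>user
Que puis-je faire à Marseille en hiver?<|im_end|>
<|im_start|>assistant\n"""
```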
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
@misc{faysse2024croissantllm,
title={CroissantLLM: A Truly Bilingual French-English Language Model},
author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2402.00786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Usage
This model is a Chat model: it is finetuned for chat use and works best with the provided template.
#### With generate
This might require a stopping criterion on the `<|im_end|>` token; a hedged sketch is shown after the example below.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/CroissantLLMChat-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
generation_args = {
"max_new_tokens": 256,
"do_sample": True,
"temperature": 0.3,
"top_p": 0.90,
"top_k": 40,
"repetition_penalty": 1.05,
"eos_token_id": [tokenizer.eos_token_id, 32000],
}
chat = [
{"role": "user", "content": "Qui est le président francais actuel ?"},
]
chat_input = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(chat_input, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, **generation_args)
print(tokenizer.decode(tokens[0]))
# print tokens individually
print([(tokenizer.decode([tok]), tok) for tok in tokens[0].tolist()])
```
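If your version of `transformers` does not stop on the custom `eos_token_id` list above, an explicit stopping criterion can be added. This is a minimal sketch that reuses `tokenizer`, `model`, `inputs`, and `generation_args` from the example above:
```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnToken(StoppingCriteria):
    """Stop generation as soon as the given token id is produced."""
    def __init__(self, stop_token_id: int):
        self.stop_token_id = stop_token_id

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        return input_ids[0, -1].item() == self.stop_token_id

# 32000 is used as the <|im_end|> id in generation_args above; resolving it from the
# tokenizer avoids hard-coding the value.
im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
stopping_criteria = StoppingCriteriaList([StopOnToken(im_end_id)])
tokens = model.generate(**inputs, **generation_args, stopping_criteria=stopping_criteria)
print(tokenizer.decode(tokens[0]))
```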
## Model limitations
Evaluation results indicate the model is strong in its size category, offering decent performance on writing-based tasks and internal knowledge, and very strong performance on translation tasks. The small size of the CroissantLLM model however hinders its capacity to perform more complex reasoning-based tasks, at least in a zero- or few-shot manner in its generalist base or chat-model versions. This is aligned with other models of similar size and underlines the importance of scale for more abstract tasks.
#### Knowledge Cutoff
The model training dataset has a data cutoff date corresponding to the November 2023 Wikipedia dump. This is the de facto knowledge cutoff date for our base model, although a lot of information dates back further. Updated versions can be trained through continued pre-training or subsequent fine-tuning.
#### Multilingual performance.
CroissantLLM is mostly a French and English model. Code performance is relatively limited, and although some data from other languages is included in the SlimPajama training set, out-of-the-box performance in other languages should not be expected, though some European languages do work quite well.
#### Hallucinations.
CroissantLLM can hallucinate and output factually incorrect data, especially regarding complex topics. This is to be expected given the small model size, and hallucination rates appear lower than those of most models in the same size category, although no quantitative assessments have been conducted outside of MT-Bench experiments.
| [
"TRANSLATION"
] | [
"CRAFT"
] | Non_BioNLP |
legalvn/paraphrase-multilingual-MiniLM-L12-v2-vn-81500 | legalvn | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:651725",
"loss:SoftmaxLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,733 | 1,733 | 6 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:651725
- loss:SoftmaxLoss
widget:
- source_sentence: Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?
sentences:
- Chu kỳ kiểm định chất lượng giáo dục nghề nghiệp\n...\n2. Trường hợp cơ sở giáo
dục nghề nghiệp có ngành, nghề trọng điểm; chương trình đào tạo ngành, nghề trọng
điểm; cơ sở giáo dục nghề nghiệp và chương trình đào tạo các ngành, nghề phục
vụ yêu cầu công tác quản lý nhà nước phải thực hiện kiểm định chất lượng giáo
dục nghề nghiệp theo quy định tại điểm d khoản 3 Điều 65 của Luật Giáo dục nghề
nghiệp số 74/2014/QH13 ngày 27 tháng 11 năm 2014 nhưng không đạt tiêu chuẩn kiểm
định chất lượng giáo dục nghề nghiệp thì trong thời hạn 03 năm phải thực hiện
kiểm định lại.
- Vệ sinh môi trường, vệ sinh tòa nhà\n1. Trách nhiệm của các đơn vị, cán bộ, công
chức, viên chức, nhân viên và người lao động trong việc giữ gìn vệ sinh tại nơi
làm việc và khu vực công cộng:\na) Hàng ngày tự vệ sinh sàn nhà, bàn ghế, tủ,
các thiết bị được trang cấp và tổng vệ sinh phòng làm việc vào chiều thứ Sáu hàng
tuần;\nb) Có trách nhiệm thu gom rác thải trong phòng chuyển ra thùng rác đặt
tại các hành lang;\nc) Không đổ nước chè, cà phê, ….. xuống sàn nhà, hành lang,
tường nhà và khu vệ sinh;\nd) Nghiêm cấp hút thuốc lá trong phòng làm việc, phòng
họp, cầu thang máy, cầu thang bộ, tầng hầm;\nđ) Không khạc nhổ, bôi bẩn lên tường,
không vứt rác thải, gạt tàn thuốc lá, đầu lọc thuốc lá xuống sàn nhà và các khu
vực công cộng;\ne) Nghiêm cấm hái hoa, bẻ cành, dẫm lên thảm cỏ, nhổ cây trong
khuôn viên cơ quan.\ng) Nghiêm cấm mang chất độc hại vào cơ quan.\n…
- Nguyên tắc áp dụng\n1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối
tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một
mức phụ cấp ưu đãi theo nghề cao nhất.\n2. Công chức, viên chức đã hưởng phụ cấp
ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày
22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số
64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên
chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì
không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.
- source_sentence: Số lượng thành viên Hội đồng khoa học và đào tạo là bao nhiêu?
sentences:
- 'Cấp Giấy chứng nhận chất lượng an toàn kỹ thuật và bảo vệ môi trường trong sản
xuất, lắp ráp ô tô, rơ moóc và sơ mi rơ moóc\n2.1. Trình tự thực hiện:\na) Nộp
hồ sơ TTHC:\n- Cơ sở sản xuất lập hồ sơ kiểm tra xe cơ giới theo quy định và nộp
đến Cục Đăng kiểm Việt Nam.\nb) Giải quyết TTHC:\n- Cục Đăng kiểm Việt Nam tiếp
nhận và kiểm tra thành phần hồ sơ kiểm tra xe cơ giới: nếu hồ sơ không đầy đủ
theo quy định thì hướng dẫn Cơ sở sản xuất hoàn thiện lại; Nếu hồ sơ đầy đủ theo
quy định thì thống nhất về thời gian và địa điểm thực hiện đánh giá điều kiện
kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất;\n- Cục Đăng kiểm Việt Nam tiến
hành kiểm tra nội dung hồ sơ và thực hiện đánh giá điều kiện kiểm tra chất lượng
sản phẩm tại Cơ sở sản xuất theo quy định: Nếu chưa đạt yêu cầu thì thông báo
để Cơ sở sản xuất hoàn thiện lại; Nếu đạt yêu cầu thì cấp Giấy chứng nhận trong
thời hạn 03 ngày làm việc kể từ ngày kết thúc kiểm tra, đánh giá hồ sơ đầy đủ,
hợp lệ theo quy định và có kết quả đánh giá COP đạt yêu cầu;\n- Cơ sở sản xuất
nộp hồ sơ kiểm tra xe cơ giới và nhận kết quả trực tiếp tại trụ sở Cục Đăng kiểm
Việt Nam hoặc qua hệ thống bưu chính hoặc qua hệ thống dịch vụ công trực tuyến
hoặc qua hình thức phù hợp khác.\n...'
- Phiên họp Hội đồng khoa học\n1. Hội đồng khoa học họp định kỳ 06 tháng/01 lần.
Các phiên họp định kỳ phải có ít nhất 2/3 tổng số thành viên của Hội đồng khoa
học tham dự.\n2. Phiên họp đột xuất của Hội đồng khoa học được triệu tập theo
quyết định của Chủ tịch và phải có trên 1/2 số thành viên của Hội đồng khoa học
tham dự.\n3. Viện trưởng VKSND tối cao tham dự phiên họp của Hội đồng khoa học
khi thấy cần thiết.\n4. Tùy thuộc vào nội dung chương trình phiên họp, Chủ tịch
Hội đồng khoa học có thể quyết định mời các nhà khoa học trong và ngoài ngành
KSND tham gia phiên họp.\n5. Nội dung phiên họp, các tài liệu liên quan đến phiên
họp của Hội đồng khoa học phải được thông báo hoặc chuyển cho các Thành viên chậm
nhất là 3 ngày làm việc trước ngày họp, trừ trường hợp đột xuất.\n6. Hội đồng
khoa học thảo luận dân chủ, tập thể, công khai, quyết định theo đa số về những
vấn đề thuộc nội dung phiên họp và những vấn đề do Chủ tịch Hội đồng khoa học
nêu ra hoặc do các Thành viên đề nghị và được Chủ tịch Hội đồng khoa học chấp
thuận.\nChủ tịch Hội đồng khoa học chủ trì thảo luận và kết luận tại phiên họp.
Đối với những vấn đề phức tạp còn nhiều ý kiến khác nhau, Hội đồng khoa học tiến
hành biểu quyết. Những vấn đề được biểu quyết đạt trên 2/3 số phiếu của thành
viên có mặt hoặc trên 50% tổng số thành viên Hội đồng được coi là ý kiến chính
thức của Hội đồng khoa học. Các ý kiến khác được bảo lưu, ghi vào biên bản cuộc
họp.
- Hồ sơ, thủ tục công nhận liệt sĩ\n1. Người khi hy sinh đang thuộc quân đội, công
an quản lý thì Bộ Quốc phòng, Bộ Công an chịu trách nhiệm:\na) Hướng dẫn về quy
trình lập hồ sơ đề nghị công nhận liệt sĩ theo quy định.\nb) Có văn bản đề nghị
kèm hồ sơ gửi Bộ Lao động - Thương binh và Xã hội thẩm định trong thời gian không
quá 50 ngày kể từ ngày cơ quan, đơn vị trực tiếp quản lý người hy sinh xác lập,
hoàn thiện các giấy tờ quy định tại Điều 17 Nghị định này.
- source_sentence: Ban Tài chính Văn phòng Kiểm toán nhà nước thực hiện những chức
năng gì?
sentences:
- 'Tiếp nhận hồ sơ và trả kết quả\n...\n2.2.4. Lao động nam hoặc người chồng của
lao động nữ mang thai hộ nghỉ việc khi vợ sinh con: Bản sao giấy chứng sinh hoặc
bản sao giấy khai sinh hoặc trích lục khai sinh của con; trường hợp sinh con phải
phẫu thuật hoặc sinh con dưới 32 tuần tuổi mà giấy chứng sinh không thể hiện thì
có thêm giấy tờ của cơ sở khám bệnh, chữa bệnh thể hiện việc sinh con phải phẫu
thuật, sinh con dưới 32 tuần tuổi. Trường hợp con chết sau khi sinh mà chưa được
cấp giấy chứng sinh thì thay bằng trích sao hoặc tóm tắt hồ sơ bệnh án hoặc giấy
ra viện của người mẹ hoặc của lao động nữ mang thai hộ thể hiện con chết…'
- Việc tự giám sát chất lượng dịch vụ viễn thông của doanh nghiệp viễn thông\n1.
Các doanh nghiệp viễn thông được Bộ Thông tin và Truyền thông cấp giấy phép kinh
doanh dịch vụ viễn thông phải thường xuyên tự giám sát chất lượng dịch vụ đối
với tất cả các dịch vụ thuộc “Danh mục dịch vụ viễn thông bắt buộc quản lý chất
lượng” mà mình cung cấp.\n2. Trong trường hợp dịch vụ mà mình cung cấp có sự cố
thì doanh nghiệp viễn thông phải thực hiện báo cáo đột xuất như quy định tại Khoản
3 Điều 8 của Thông tư này.
- Cục Quản lý, giám sát bảo hiểm; Cục Quản lý Công sản; Cục Quản lý Giá; Cục Quản
lý Nợ và Tài chính đối ngoại; Cục Quản lý, giám sát Kế toán, Kiểm toán; Cục Quản
lý Công sản; Cục Tài chính doanh nghiệp và Vụ Tài chính ngân hàng chủ trì phối
hợp với Cục Tin học & Thống kê Tài chính xây dựng quy trình điện tử từng thủ tục
hành chính theo phạm vi quản lý đối với danh mục thủ tục hành chính để thực hiện
tích hợp trên Hệ thống thông tin Một cửa điện tử của Bộ Tài chính.
- source_sentence: Điều kiện để Giám đốc Học viện An ninh nhân dân được thăng cấp
bậc hàm trước thời hạn như thế nào?
sentences:
- Mức độ tự chủ và trách nhiệm\n- Có ý thức và tác phong nghề nghiệp đúng chuẩn
mực, có năng lực thực hiện công việc được giao; phương pháp làm việc khoa học,
biết phân tích và giải quyết các vấn đề mới về lĩnh vực chuyên môn nghề;\n- Gắn
bó nghề nghiệp; nghiêm chỉnh chấp hành quy chế, quy định của cơ quan, doanh nghiệp,
nơi đang công tác với ý thức tổ chức kỉ luật và tinh thần trách nhiệm cao trong
công việc;\n- Lập được các biện pháp an toàn và đảm bảo an toàn, vệ sinh lao động
trong quá trình làm việc; có ý thức trách nhiệm công dân, thái độ và đạo đức nghề
nghiệp đúng đắn, sẵn sàng nhận nhiệm vụ; tự tin, cầu tiến trong công việc; hợp
tác, thân thiện, khiêm tốn trong các mối quan hệ;\n- Tự chịu trách nhiệm về chất
lượng đối với kết quả công việc, sản phẩm do mình đảm nhiệm theo các tiêu chuẩn
và chịu một phần trách nhiệm đối với kết quả công việc, sản phẩm của tổ, nhóm;
- Tổ chức bộ máy\n...\n5. Tổng cục Hải quan có thể biệt phái công chức từ các đơn
vị thuộc và trực thuộc Tổng cục để bổ sung cán bộ chủ chốt, cán bộ kỹ thuật có
năng lực, kinh nghiệm cho Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng
cục Hải quan. Thời hạn biệt phái các công chức không quá 03 năm, trường hợp quá
03 năm mà chưa hoàn thành dự án thì Tổng cục Hải quan xem xét quyết định bổ sung
thời gian biệt phái.\nNhân sự tuyển dụng mới của Ban Quản lý dự án đầu tư xây
dựng chuyên ngành của Tổng cục Hải quan là viên chức hoặc hợp đồng lao động, thực
hiện theo quy định về chế độ tiền lương và các chế độ, chính sách đối với viên
chức và người lao động.\n...
- Biệt phái công chức\n...\n6. Không thực hiện biệt phái công chức nữ đang mang
thai hoặc nuôi con dưới 36 tháng tuổi.
- source_sentence: Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức,
viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?
sentences:
- Nhiệm vụ của giáo viên\n1. Thực hiện nhiệm vụ tổ chức các hoạt động dạy học, giáo
dục theo kế hoạch giáo dục của nhà trường và kế hoạch giáo dục của tổ chuyên môn;
quản lý học sinh trong các hoạt động giáo dục do nhà trường tổ chức; tham gia
các hoạt động chuyên môn; chịu trách nhiệm về chất lượng, hiệu quả giáo dục.\n2.
Trau dồi đạo đức, nêu cao tinh thần trách nhiệm, giữ gìn phẩm chất, danh dự, uy
tín của nhà giáo; gương mẫu trước học sinh; thương yêu, đối xử công bằng và tôn
trọng nhân cách của học sinh; bảo vệ các quyền và lợi ích chính đáng của học sinh;
đoàn kết, giúp đỡ đồng nghiệp.\n3. Học tập, rèn luyện để nâng cao sức khỏe, trình
độ chính trị, chuyên môn, nghiệp vụ, đổi mới phương pháp dạy học, giáo dục.\n4.
Tham gia tập huấn, bồi dưỡng chuyên môn, nghiệp vụ.\n5. Tham gia công tác phổ
cập giáo dục trung học cơ sở ở địa phương.\n6. Thực hiện nghĩa vụ công dân, các
quy định của pháp luật và của ngành Giáo dục, các quyết định của hiệu trưởng;
thực hiện nhiệm vụ do hiệu trưởng phân công, chịu sự kiểm tra, đánh giá của hiệu
trưởng và các cấp quản lý giáo dục.\n7. Phối hợp với Đội Thiếu niên Tiền phong
Hồ Chí Minh, Đoàn Thanh niên Cộng sản Hồ Chí Minh, Hội Liên hiệp Thanh niên Việt
Nam, gia đình học sinh và các tổ chức xã hội liên quan để tổ chức hoạt động giáo
dục.\n8. Thực hiện các nhiệm vụ khác theo quy định của pháp luật.
- “Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong
trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP
ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \n1. Máy
PCR. \n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \n3. Test kít
xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \n4. Máy thở chức năng
cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng
cao, máy thở xách tay. \n5. Máy lọc máu liên tục. \n6. Máy X-Quang di động. \n7.
Máy đo khí máu (đo được điện giải, lactat, hematocrite). \n8. Máy theo dõi bệnh
nhân>5 thông số. \n9. Bơm tiêm điện; Bơm truyền dịch. \n10. Máy phá rung tim có
tạo nhịp. \n11. Máy đo thời gian đông máu. \n12. Máy đo huyết động.”
- Thời điểm đánh giá xếp loại chất lượng hằng năm\n...\n2. Căn cứ tình hình thực
tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống
nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất
lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo
đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\n3. Tại thời điểm đánh giá,
xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ
chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm
làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ
được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá,
xếp loại chất lượng theo quy định của pháp luật và Quy chế này.
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("legalvn/paraphrase-multilingual-MiniLM-L12-v2-vn-81500")
# Run inference
sentences = [
'Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?',
'Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này.',
'“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. Máy đo huyết động.”',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 651,725 training samples
* Columns: <code>queries</code>, <code>corpus</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | queries | corpus | score |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 24.71 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 121.6 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~43.80%</li><li>1: ~37.00%</li><li>2: ~19.20%</li></ul> |
* Samples:
| queries | corpus | score |
|:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Người học ngành quản lý khai thác công trình thủy lợi trình độ cao đẳng phải có khả năng học tập và nâng cao trình độ như thế nào?</code> | <code>Khả năng học tập, nâng cao trình độ\n- Khối lượng khối lượng kiến thức tối thiểu, yêu cầu về năng lực mà người học phải đạt được sau khi tốt nghiệp ngành, nghề Dược trình độ cao đẳng có thể tiếp tục phát triển ở các trình độ cao hơn;\n- Người học sau tốt nghiệp có năng lực tự học, tự cập nhật những tiến bộ khoa học công nghệ trong phạm vi ngành, nghề để nâng cao trình độ hoặc học liên thông lên trình độ cao hơn trong cùng ngành nghề hoặc trong nhóm ngành, nghề hoặc trong cùng lĩnh vực đào tạo.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nTrong phạm vi điều chỉnh của văn bản quy phạm pháp luật:\n1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.\n2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.\n3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Mục đích lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nLồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật (sau đây gọi tắt là văn bản) là một biện pháp để thực hiện mục tiêu bình đẳng giới, xóa bỏ phân biệt đối xử về giới, bảo đảm quyền, lợi ích hợp pháp, phù hợp với đặc thù của mỗi giới; tạo cơ hội phát triển như nhau cho nam và nữ trong các lĩnh vực của đời sống xã hội và gia đình; bảo đảm bình đẳng giới thực chất giữa nam và nữ.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
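The training script itself is not included in this card. The following is a minimal, hypothetical sketch of how a `(queries, corpus, score)` dataset like the one above can be paired with `SoftmaxLoss` using the Sentence Transformers v3 trainer listed under Framework Versions; the toy rows are placeholders, not the actual data:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Toy stand-in for the (queries, corpus, score) columns described above;
# "score" is picked up as the label column by the trainer.
train_dataset = Dataset.from_dict({
    "queries": ["câu hỏi ví dụ 1", "câu hỏi ví dụ 2"],
    "corpus": ["đoạn văn bản pháp luật ví dụ 1", "đoạn văn bản pháp luật ví dụ 2"],
    "score": [2, 1],
})

# SoftmaxLoss trains a small classifier over the 3 relevance labels (0, 1, 2)
loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```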
### Training Hyperparameters
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:-----:|:-------------:|
| 0.0061 | 500 | 1.0473 |
| 0.0123 | 1000 | 1.0447 |
| 0.0184 | 1500 | 1.0383 |
| 0.0246 | 2000 | 1.0395 |
| 0.0307 | 2500 | 1.0436 |
| 0.0368 | 3000 | 1.0375 |
| 0.0430 | 3500 | 1.0189 |
| 0.0491 | 4000 | 1.0282 |
| 0.0552 | 4500 | 1.0355 |
| 0.0614 | 5000 | 1.0286 |
| 0.0675 | 5500 | 1.0264 |
| 0.0737 | 6000 | 1.0174 |
| 0.0798 | 6500 | 1.0238 |
| 0.0859 | 7000 | 1.0217 |
| 0.0921 | 7500 | 1.0203 |
| 0.0982 | 8000 | 1.0201 |
| 0.1043 | 8500 | 1.0266 |
| 0.1105 | 9000 | 1.0379 |
| 0.1166 | 9500 | 1.0367 |
| 0.1228 | 10000 | 1.0384 |
| 0.1289 | 10500 | 1.0291 |
| 0.1350 | 11000 | 1.0362 |
| 0.1412 | 11500 | 1.0354 |
| 0.1473 | 12000 | 1.0204 |
| 0.1534 | 12500 | 1.0401 |
| 0.1596 | 13000 | 1.0237 |
| 0.1657 | 13500 | 1.0271 |
| 0.1719 | 14000 | 1.0235 |
| 0.1780 | 14500 | 1.0329 |
| 0.1841 | 15000 | 1.0474 |
| 0.1903 | 15500 | 1.0547 |
| 0.1964 | 16000 | 1.0557 |
| 0.2025 | 16500 | 1.0626 |
| 0.2087 | 17000 | 1.0551 |
| 0.2148 | 17500 | 1.0526 |
| 0.2210 | 18000 | 1.125 |
| 0.2271 | 18500 | 1.2996 |
| 0.2332 | 19000 | 1.0703 |
| 0.2394 | 19500 | 1.0601 |
| 0.2455 | 20000 | 1.0835 |
| 0.2516 | 20500 | 1.0583 |
| 0.2578 | 21000 | 1.141 |
| 0.2639 | 21500 | 1.0802 |
| 0.2701 | 22000 | 1.0589 |
| 0.2762 | 22500 | 1.086 |
| 0.2823 | 23000 | 1.0743 |
| 0.2885 | 23500 | 1.0605 |
| 0.2946 | 24000 | 1.0602 |
| 0.3007 | 24500 | 1.0732 |
| 0.3069 | 25000 | 1.0614 |
| 0.3130 | 25500 | 1.0666 |
| 0.3192 | 26000 | 1.0669 |
| 0.3253 | 26500 | 1.0627 |
| 0.3314 | 27000 | 1.0659 |
| 0.3376 | 27500 | 1.07 |
| 0.3437 | 28000 | 1.0783 |
| 0.3498 | 28500 | 1.078 |
| 0.3560 | 29000 | 1.0832 |
| 0.3621 | 29500 | 1.0695 |
| 0.3683 | 30000 | 1.0714 |
| 0.3744 | 30500 | 1.3794 |
| 0.3805 | 31000 | 1.0838 |
| 0.3867 | 31500 | 1.0541 |
| 0.3928 | 32000 | 1.0799 |
| 0.3989 | 32500 | 1.0622 |
| 0.4051 | 33000 | 1.0597 |
| 0.4112 | 33500 | 1.0731 |
| 0.4174 | 34000 | 1.0871 |
| 0.4235 | 34500 | 1.0535 |
| 0.4296 | 35000 | 1.3215 |
| 0.4358 | 35500 | 1.1501 |
| 0.4419 | 36000 | 1.1088 |
| 0.4480 | 36500 | 1.0844 |
| 0.4542 | 37000 | 1.0981 |
| 0.4603 | 37500 | 1.0856 |
| 0.4665 | 38000 | 1.0956 |
| 0.4726 | 38500 | 1.0813 |
| 0.4787 | 39000 | 1.0843 |
| 0.4849 | 39500 | 1.1053 |
| 0.4910 | 40000 | 1.092 |
| 0.4971 | 40500 | 1.081 |
| 0.5033 | 41000 | 1.0919 |
| 0.5094 | 41500 | 1.0681 |
| 0.5156 | 42000 | 1.0826 |
| 0.5217 | 42500 | 1.0809 |
| 0.5278 | 43000 | 1.093 |
| 0.5340 | 43500 | 1.0709 |
| 0.5401 | 44000 | 1.0623 |
| 0.5462 | 44500 | 1.0801 |
| 0.5524 | 45000 | 1.0833 |
| 0.5585 | 45500 | 1.0816 |
| 0.5647 | 46000 | 1.0697 |
| 0.5708 | 46500 | 1.0864 |
| 0.5769 | 47000 | 1.0744 |
| 0.5831 | 47500 | 1.0897 |
| 0.5892 | 48000 | 1.0727 |
| 0.5953 | 48500 | 1.0621 |
| 0.6015 | 49000 | 1.0582 |
| 0.6076 | 49500 | 1.0681 |
| 0.6138 | 50000 | 1.083 |
| 0.6199 | 50500 | 1.0632 |
| 0.6260 | 51000 | 1.0809 |
| 0.6322 | 51500 | 1.0525 |
| 0.6383 | 52000 | 1.6649 |
| 0.6444 | 52500 | 1.0873 |
| 0.6506 | 53000 | 1.0649 |
| 0.6567 | 53500 | 1.0591 |
| 0.6629 | 54000 | 1.061 |
| 0.6690 | 54500 | 1.0682 |
| 0.6751 | 55000 | 1.0616 |
| 0.6813 | 55500 | 1.0827 |
| 0.6874 | 56000 | 1.0799 |
| 0.6935 | 56500 | 1.0705 |
| 0.6997 | 57000 | 1.0821 |
| 0.7058 | 57500 | 1.0763 |
| 0.7120 | 58000 | 1.0842 |
| 0.7181 | 58500 | 1.0813 |
| 0.7242 | 59000 | 1.0678 |
| 0.7304 | 59500 | 1.0894 |
| 0.7365 | 60000 | 1.0733 |
| 0.7426 | 60500 | 1.0688 |
| 0.7488 | 61000 | 1.0665 |
| 0.7549 | 61500 | 1.0681 |
| 0.7611 | 62000 | 1.301 |
| 0.7672 | 62500 | 1.0907 |
| 0.7733 | 63000 | 1.3941 |
| 0.7795 | 63500 | 1.1355 |
| 0.7856 | 64000 | 1.2196 |
| 0.7917 | 64500 | 1.225 |
| 0.7979 | 65000 | 1.1437 |
| 0.8040 | 65500 | 1.0787 |
| 0.8102 | 66000 | 1.0686 |
| 0.8163 | 66500 | 1.1017 |
| 0.8224 | 67000 | 1.0999 |
| 0.8286 | 67500 | 1.0771 |
| 0.8347 | 68000 | 1.1015 |
| 0.8408 | 68500 | 1.0826 |
| 0.8470 | 69000 | 1.1046 |
| 0.8531 | 69500 | 1.0735 |
| 0.8593 | 70000 | 1.1056 |
| 0.8654 | 70500 | 1.1077 |
| 0.8715 | 71000 | 1.0897 |
| 0.8777 | 71500 | 1.0775 |
| 0.8838 | 72000 | 1.0907 |
| 0.8899 | 72500 | 1.0705 |
| 0.8961 | 73000 | 1.0776 |
| 0.9022 | 73500 | 1.0896 |
| 0.9084 | 74000 | 1.0889 |
| 0.9145 | 74500 | 1.0804 |
| 0.9206 | 75000 | 1.1087 |
| 0.9268 | 75500 | 1.0738 |
| 0.9329 | 76000 | 1.0806 |
| 0.9390 | 76500 | 1.0899 |
| 0.9452 | 77000 | 1.0814 |
| 0.9513 | 77500 | 1.0723 |
| 0.9575 | 78000 | 1.0923 |
| 0.9636 | 78500 | 1.0748 |
| 0.9697 | 79000 | 1.0745 |
| 0.9759 | 79500 | 1.081 |
| 0.9820 | 80000 | 1.08 |
| 0.9881 | 80500 | 1.0905 |
| 0.9943 | 81000 | 1.1064 |
| 1.0004 | 81500 | 1.0929 |
</details>
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.3.1
- Transformers: 4.43.0
- PyTorch: 2.5.0+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"PCR"
] | Non_BioNLP |
predibase/bc5cdr | predibase | text-generation | [
"peft",
"safetensors",
"text-generation",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | 1,708 | 1,708 | 74 | 1 | ---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
pipeline_tag: text-generation
---
Description: 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions.\
Original dataset: https://huggingface.co/datasets/tner/bc5cdr \
---\
Try querying this adapter for free in Lora Land at https://predibase.com/lora-land! \
The adapter_category is Named Entity Recognition and the name is Chemical and Disease Recognition (bc5cdr)\
---\
Sample input: Your task is a Named Entity Recognition (NER) task. Predict the category of each entity, then place the entity into the list associated with the category in an output JSON payload. Below is an example:
Input: "Naloxone reverses the antihypertensive effect of clonidine ."
Output: {'B-Chemical': ['Naloxone', 'clonidine'], 'B-Disease': [], 'I-Disease': [], 'I-Chemical': []}
Now, complete the task.
Input: "A standardized loading dose of VPA was administered , and venous blood was sampled at 0 , 1 , 2 , 3 , and 4 hours ."
Output: \
---\
Sample output: {'B-Chemical': ['VPA'], 'B-Disease': [], 'I-Disease': [], 'I-Chemical': []}\
---\
Try using this adapter yourself!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "predibase/bc5cdr"

# Load the base Mistral model and tokenizer, then attach the bc5cdr LoRA adapter
# (requires the `peft` package to be installed)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.load_adapter(peft_model_id)
``` | [
"NAMED_ENTITY_RECOGNITION"
] | [
"BC5CDR"
] | BioNLP |
RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,713 | 1,713 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-deduped-v0 - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-410m-deduped-v0/
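The lines below are not part of the original upload notes; they are a minimal loading sketch for this 8-bit export, assuming `bitsandbytes` and `accelerate` are installed so the pre-quantized weights load as published:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/EleutherAI_-_pythia-410m-deduped-v0-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint already carries its bitsandbytes 8-bit quantization config
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, I am", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```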
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
- pythia_v0
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/models?other=pythia).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
## Pythia-410M-deduped
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change in the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-410M-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-410M-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-410M-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-410M-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-410M-deduped to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-410M-deduped may produce socially unacceptable or undesirable text,
*even if* the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-410M-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
Pythia-410M-deduped was trained on the Pile **after the dataset has been
globally deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
#### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge – Challenge Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/>
</details>
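The exact harness invocation is not reproduced in this card. A minimal sketch of scoring one checkpoint on the same tasks, assuming a recent `lm-eval` release that exposes `simple_evaluate` (the task names and arguments here are illustrative):

```python
import lm_eval

# Evaluate one Pythia checkpoint on a subset of the plotted benchmarks
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-410m-deduped,revision=step143000",
    tasks=["lambada_openai", "piqa", "winogrande", "arc_challenge", "sciq"],
    batch_size=8,
)
print(results["results"])
```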
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] | Non_BioNLP |
RichardErkhov/phamhai_-_Llama-3.2-3B-Instruct-Frog-4bits | RichardErkhov | null | [
"safetensors",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,732 | 1,732 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-Frog - bnb 4bits
- Model creator: https://huggingface.co/phamhai/
- Original model: https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog/
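The lines below are not part of the original upload notes; they are a minimal loading sketch for this 4-bit export, assuming `bitsandbytes` and `accelerate` are installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/phamhai_-_Llama-3.2-3B-Instruct-Frog-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint ships with its bitsandbytes 4-bit quantization config
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```

The chat-template examples in the original description below should work unchanged once `model` and `tokenizer` point at this repo.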
Original model description:
---
license: llama3.2
language:
- en
- vi
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
tags:
- RAG
- Vietnamese
- Generation
- Function_Calling
- Function Calling
- FC
- Summarization
- Rewriting
- Functions
- VLLM
- LLM
---
<p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6612cc790b91dd96968028f9/yP51EyRNg-CHCKB4gBYan.png" width="300" /> </p>
<h1>Llama-3.2-3B-Instruct-Frog - a RAG-optimized LLaMA3.2 for Vietnamese</h1>
**Quantized Version**: [phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog-Q4_K_M-GGUF)
At the end of September 2024, Meta released two lightweight LLM model versions: [Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) and [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). However, these models do not support Vietnamese well, especially for tasks related to Retrieval-Augmented Generation (RAG).
Today, I am excited to announce the release of two models specifically trained to provide better support for Vietnamese RAG tasks.
<h2>Model Details:</h2>
+ Base Models: Llama-3.2-1B-Instruct and Llama-3.2-3B-Instruct
+ Performance: The models are optimized for fast inference and can be easily deployed on on-premises and edge devices (laptop, smartphone, NVIDIA Jetson Xavier, Raspberry Pi, etc.).
+ Model weights:
+ [Llama-3.2-1B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-1B-Instruct-Frog): 131K context length, 1 billion parameters
+ [Llama-3.2-3B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog): 131K context length, 3 billion parameters
<blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, you agree to and comply with the terms and conditions specified in Meta's LLaMA-3 license.</p></blockquote>
<h2>Model Evaluation</h2>
We evaluated this model on the [VMLU benchmark](https://vmlu.ai/) and achieved an accuracy of **45.13**. However, this benchmark is not the focus of our current efforts. We believe it will be very difficult for language models with fewer than 13 billion parameters to retain enough knowledge to answer questions across diverse user contexts, especially for smaller models with under 3 billion parameters. For the model to handle real-world business scenarios effectively and avoid hallucinations, it is almost essential to supplement knowledge from external sources (through RAG). Therefore, we developed this model with a primary focus on optimizing its RAG capabilities. Internal testing is currently underway and will be updated soon.
***Update***:
Function Calling Benchmark: https://huggingface.co/datasets/phamhai/Vietnamese-Function-Calling-Test
| Model | Model size | Function name Acc (%) | Exact Match Acc (%) |
| ------------ | ------------------ | ---------- | --------- |
| [phamhai/Llama-3.2-3B-Instruct-Frog](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog) | ~3B | 95.79 | 51.05 |
| [Gemini-1.5-Pro](https://deepmind.google/technologies/gemini/pro/) | --- | 96.96 | 55.16 |
| [Gemini-1.5-Flash](https://deepmind.google/technologies/gemini/flash/) | --- | 97.10 | 51.64 |
| [Gemini-1.5-Flash-8B](https://deepmind.google/technologies/gemini/flash/) | --- | 97.38 | 64.75 |
| [gpt-4o-2024-08-06](https://platform.openai.com/docs/models#gpt-4o) | --- | 94.38 | 52.88 |
| [arcee-ai/Arcee-VyLinh](https://huggingface.co/arcee-ai/Arcee-VyLinh) | ~3B | | |
| [phamhai/Llama-3.2-3B-Instruct-Frog-Pro](https://huggingface.co/phamhai/Llama-3.2-3B-Instruct-Frog-Pro) | ~3B | 98.12 | 56.38 |
<p align="left"> Table 1. Vietnamese Function Calling Benchmark </p>
<h2> Run the model </h2>
(*Disclaimer: The bot is named Vivi, after my passion for VinFast vehicles and my hope to one day build my own smaller models for VinFast's car lines (VinFast calls its virtual assistant Vivi). This model has no affiliation with VinFast or any related entities.*)
<h3> with Huggingface's transformers </h3>
<h4> 1. QnA task </h4>
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "phamhai/Llama-3.2-3B-Instruct-Frog"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
messages = [
{"role": "system", "content": "Bạn là một người bạn gái xinh đẹp. Tên của bạn là Vivi. Hãy luôn xưng là Vivi, gọi người nói là anh và trả lời luôn bắt đầu bằng cụm từ Dạ thưa anh yêu của em."},
{"role": "user", "content": "xin chào em"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# Dạ thưa anh yêu của em, em rất vui được gặp anh.
messages = [
{"role": "system", "content": "Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\nNếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch."},
{"role": "user", "content": "Làm sao để chữa bệnh đau đầu?"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
# Có nhiều nguyên nhân gây đau đầu, bao gồm căng thẳng, thiếu ngủ, mất nước, chứng đau đầu thường xuyên, đau đầu do chứng đau nửa đầu, và nhiều hơn nữa. Dưới đây là một số cách để giảm đau đầu:
# 1. Nghỉ ngơi: Nếu đau đầu là do căng thẳng hoặc thiếu ngủ, hãy nghỉ ngơi và ngủ đủ giấc.
# 2. Massage: Massage vùng cổ và vai có thể giúp giảm đau đầu.
# 3. Uống nước: Đảm bảo bạn uống đủ nước để giữ cho cơ thể luôn được cung cấp đủ nước.
# 4. Sử dụng thuốc giảm đau: Nếu đau đầu không giảm sau khi nghỉ ngơi và uống nước, bạn có thể sử dụng thuốc giảm đau như paracetamol hoặc ibuprofen.
# 5. Sử dụng băng lạnh: Nếu đau đầu do chứng đau nửa đầu, bạn có thể sử dụng băng lạnh để giảm đau.
# 6. Thay đổi chế độ ăn uống: Nếu đau đầu liên quan đến chế độ ăn uống của bạn, hãy thay đổi chế độ ăn uống để giảm đau đầu.
# Nếu đau đầu kéo dài hoặc trở nên nghiêm trọng hơn, bạn nên tìm kiếm sự giúp đỡ y tế để được chẩn đoán và điều trị đúng cách.
```
<h4> 2. Summarization task </h4>
<h5> Focused Answer </h5>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
Đoạn 0: "Chính phủ đề xuất bổ sung gần 20.700 tỷ đồng vốn điều lệ cho Ngân hàng Ngoại thương Việt Nam (Vietcombank) từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Chiều 23/10, thừa ủy quyền Chính phủ, Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc trình Quốc hội về bổ sung vốn Nhà nước tại Ngân hàng Ngoại Thương Việt Nam (Vietcombank). Theo đó, Chính phủ đề nghị tăng vốn điều lệ cho ngân hàng này gần 20.700 tỷ đồng từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Số tiền này lấy từ nguồn lợi nhuận còn lại lũy kế đến hết năm 2018 và lãi còn lại năm 2021. Vốn điều lệ dự kiến rót thêm cho Vietcombank gần bằng lợi nhuận hợp nhất trước thuế nửa đầu năm nay của nhà băng này. Việc bổ sung vốn cho "ông lớn" ngân hàng quốc doanh được Phó thủ tướng nhấn mạnh là cấp thiết để duy trì tỷ lệ vốn góp Nhà nước, phù hợp chiến lược phát triển kinh tế xã hội, tạo nguồn lực hỗ trợ ngân hàng yếu kém. Phó thủ tướng cho biết, phần lợi nhuận còn lại lũy kế hết năm 2018 và lãi còn lại 2021 hiện được hạch toán theo dõi tại VCB, chưa nằm trong cân đối ngân sách Nhà nước. Do vậy, nguồn vốn đề xuất tăng cho ngân hàng này không ảnh hưởng tới kế hoạch dự toán thu chi ngân sách 2024-2025. Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Vốn điều lệ của Vietcombank hiện là 55.891 tỷ đồng, thấp hơn nhiều so với VPBank (79.339 tỷ đồng), Techcombank (70.450 tỷ đồng) và không có sự cách biệt lớn so với một số ngân hàng thương mại cổ phần như MB (52.871) tỷ đồng, ACB (44.667 tỷ đồng) và SHB (36.629 tỷ đồng). Ngoài ra, việc tăng vốn nhằm để ngân hàng này đáp ứng các tỷ lệ an toàn tối thiểu. Tính tới cuối 2023, tỷ lệ an toàn vốn (CAR) của ngân hàng này là 11,05%, đảm bảo quy định. Tuy nhiên, mức này thấp hơn các ngân hàng thương mại cổ phần (VPBank, MB là 12-13%; Techcombank 13-15%...) và các nhà băng trong khu vực (Singapore là 17,1%, Indonesia 23,27%...). Thẩm tra nội dung này, Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh cho rằng đề xuất tăng vốn cho Vietcombank bảo đảm cơ sở pháp lý và đúng thẩm quyền theo quy định. Tuy nhiên, Ủy ban Kinh tế đề nghị Chính phủ lấy ý kiến của cổ đông chiến lược nước ngoài Ngân hàng Mizuho Corporate Bank - đơn vị nắm 15% vốn điều lệ của Vietcombank. Việc này nhằm thuận lợi trong quá trình tăng vốn. Chính phủ cũng cần bổ sung thông tin hiện trạng vốn của Vietcombank so với các ngân hàng thương mại trong hệ thống hiện nay. "Có ý kiến đề nghị làm rõ nhận định nguồn vốn đề xuất để tăng vốn điều lệ không tác động đến ngân sách Nhà nước", ông Thanh cho biết. Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh đề nghị Chính phủ chỉ đạo Ngân hàng Nhà nước cùng các bộ, ngành liên quan xử lý phần lợi nhuận còn lại năm 2022, 2023 (lần lượt là 21.680 tỷ và 25.009 tỷ đồng), nhằm tăng năng lực tài chính cho Vietcombank, bù đắp mức thiếu hụt vốn tự có, bảo đảm an toàn hoạt động. Cơ quan thẩm tra lưu ý vốn được bổ sung cho Vietcombank cần được dùng để mở rộng kinh doanh, cung ứng tín dụng với các lĩnh vực, dự án quan trọng quốc gia quy mô lớn, giảm lãi suất cho vay, cũng như đổi mới mô hình quản trị, chất lượng dịch vụ của nhà băng này. 
"Chính phủ cần đánh giá kỹ tác động việc bổ sung vốn Nhà nước cho Vietcombank tới phát triển của ngành ngân hàng, hiệu quả kinh tế xã hội", Ủy ban Kinh tế lưu ý. Vietcombank là một trong 4 ngân hàng thương mại Nhà nước, bên cạnh BIDV, VietinBank và Agribank. Ngân hàng này do Nhà nước sở hữu 74,8% vốn điều lệ. Lũy kế nửa đầu năm nay, lợi nhuận hợp nhất trước thuế của nhà băng này đạt 20.835 tỷ đồng, tăng 1,6% so với cùng kỳ 2023. Với dữ liệu này, Vietcombank tiếp tục đứng đầu toàn hệ thống ngân hàng về lợi nhuận 6 tháng đầu năm. Đây cũng là mức lãi nửa đầu năm cao kỷ lục của nhà băng này. Tính đến 30/6, tổng tài sản của ngân hàng đạt hơn 1,9 triệu tỷ đồng, tăng 3,6% so với cuối 2023. Trong đó, cho vay khách hàng gần 1,37 triệu tỷ đồng, tăng 7,8%."
Đoạn 1: "Đã có vài đơn vị bán tín chỉ carbon cho khách ngoại nhưng còn thiếu cơ sở pháp lý để đảm bảo hoạt động được thuận lợi, theo chuyên gia. Thông tin tại phiên tọa đàm thuộc Diễn đàn và Triển lãm Kinh tế xanh 2024 (GEFE), ông Đỗ Ngọc Quỳnh, Tổng thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA), cho biết thị trường tín chỉ carbon tự nguyện Việt Nam đã có một số đơn vị bán được tín chỉ carbon cho nhà đầu tư, tập đoàn nước ngoài. "Họ đang mua chứng chỉ carbon và chứng chỉ năng lượng tái tạo (REC) trong tiêu chí RE100, tức 100% năng lượng tái tạo", ông cho biết. RE100 là sáng kiến toàn cầu dành cho các công ty cam kết sử dụng 100% điện năng tái tạo, phát động bởi Climate Group và CDP vào 2014. Từ trái sang, Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) nói tại tọa đàm. Ảnh: GEFE 2024 Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) chia sẻ tại tọa đàm. Ảnh: GEFE 2024 Thị trường carbon gồm hai hình thức là bắt buộc và tự nguyện. Đồ họa: Dỹ Tùng Phân biệt các loại thị trường carbon. Đồ họa: Dỹ Tùng Theo kế hoạch của chính phủ, thị trường bắt buộc sẽ vận hành thử nghiệm vào giai đoạn 2025-2028. Với thị trường tự nguyện, ông Quỳnh cho biết đã bắt đầu hình thành và cũng biến động theo diễn biến xu hướng chung toàn cầu. Chuyên gia VBMA cho rằng Việt Nam đã có chính sách chung để thực hiện cam kết Net Zero vào 2050, nhưng vẫn chưa có pháp lý đầy đủ và rõ ràng cho thị trường carbon tự nguyện. "Những người bán tại Việt Nam sau giao dịch không biết hạch toán vào đâu, nộp thuế thế nào. Một số chọn phương án tính vào thu nhập bất thường để khai thuế", ông ví dụ. Ông Nguyễn Thành Nghiệp, Luật sư thành viên công ty luật VTN và Cộng sự chỉ ra việc chưa có quy định xác định tính chất tài sản của tín chỉ carbon. "Chúng có được xem là tài sản bình thường, được thế chấp hay giao dịch thế nào chưa có đủ căn cứ pháp lý", ông nói. Ngoài ra, quy trình MRV (đo lường, báo cáo và kiểm chứng) cũng cần quy định, hướng dẫn rõ. Theo ông, ngoài các cơ quan quản lý, khu vực tư nhân cũng trông chờ xem liệu có thể tham gia hoạt động MRV không. "Trong thời gian tới, nếu hoàn thiện pháp lý, thị trường sẽ có nhiều tiềm năng phát triển hơn", ông Đỗ Ngọc Quỳnh dự báo. Ngoài tín chỉ carbon, với tiềm năng điện tái tạo thứ tư thế giới theo McKenzie, ông cho rằng có thể khai thác việc vừa bán tín chỉ carbon vừa bán được REC. Theo VBMA, quy mô thị trường carbon bắt buộc toàn cầu đạt 104 tỷ USD năm ngoái, tăng 100% so với năm 2020. Trong khi, thị trường tự nguyện đã thu hẹp còn 800 triệu USD, giảm hai phần ba so với 2021 do một số vụ bê bối liên quan đến "giặt xanh" (green washing) làm ảnh hưởng đến uy tín, niềm tin. Theo dõi biến động của thị trường thế giới giúp các bên tham gia trong thị trường carbon tự nguyện còn sơ khai của Việt Nam rút kinh nghiệm và tìm ra hướng đi. Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS) văn phòng Hà Nội, dự báo người mua sẽ cần tìm kiếm các bên bán tín chỉ có hệ thống quản trị tốt và rõ ràng. Ông cho rằng người mua đang thiên về chuộng mua tín chỉ lĩnh vực giảm phát thải sản xuất vì dễ chứng minh. Một loại được quan tâm khác là "carbon xanh dương" (blue carbon) - tín chỉ tạo ra từ các dự án hấp thụ carbon của rừng ngập mặn, đầm lầy bãi triều và cỏ biển. 
Ông chỉ ra Việt Nam triển vọng với 200.000 ha rừng ngập mặn, có thể làm các dự án carbon tương tự như ở Honduras. Bà Thu Nguyễn, Quản lý chính sách tại Apanada Management Consultancy, Đại diện Viện Tài nguyên Thế giới (WRI) khuyến nghị các dự án tín chỉ carbon nâng cao giá trị bằng cách quan tâm đến tính bình đẳng và bao trùm. Theo đó, mục tiêu không chỉ là giảm phát thải mà còn là cải thiện đời sống người dân và phát triển bình đẳng hơn "Dự án cần bảo đảm có tham vấn của cộng đồng, đặc biệt là phụ nữ và các nhóm yếu thế, để tạo ra lợi ích cho cả cộng đồng lẫn nhà đầu tư", bà nói."
Đoạn 2: "Giá nhẫn trơn liên tục điều chỉnh, tăng gần một triệu đồng trong ngày và có nơi lên sát 89 triệu đồng một lượng. 15h ngày 23/10, giá mua bán nhẫn trơn được các thương hiệu kinh doanh điều chỉnh theo diễn biến đi lên của thế giới. Chiều nay, mỗi ounce vàng quốc tế tiếp tục thiết lập kỷ lục mới 2.755 USD. Giá nhẫn trơn tại Công ty Vàng bạc đá quý Sài Gòn (SJC) cũng tăng nửa triệu đồng so với đầu sáng và gần 1 triệu đồng so với cuối ngày hôm qua, lên 86,9 - 88,2 triệu đồng. Công ty Vàng bạc đá quý Phú Nhuận (PNJ) và Mi Hồng niêm yết giá nhẫn trơn quanh vùng 87,4 - 88,4 triệu đồng. Còn tại Tập đoàn Vàng bạc đá quý DOJI, giá mua bán nhẫn trơn cùng thời điểm thậm chí lên 88 - 88,9 triệu đồng một lượng. Trước đó đầu ngày, Công ty Vàng bạc đá quý Sài Gòn (SJC) đã tăng 300.000 đồng một lượng so với cuối ngày hôm qua, niêm yết giá nhẫn trơn tại 86,3 - 87,6 triệu đồng. Biểu giá mua bán nhẫn trơn tại Tập đoàn Vàng bạc đá quý DOJI lúc 9h sáng là 87 - 88 triệu đồng, tăng 200.000 đồng so với cuối ngày hôm qua. Nhẫn trơn giữ nhịp tăng liên tục trong 10 ngày qua. So với giữa tháng, mỗi lượng nhẫn trơn đã tăng hơn 5 triệu đồng. Còn so với đầu năm, nhẫn trơn tăng gần 25 triệu một lượng, tương đương hiệu suất 39%. Trong khi giá vàng miếng SJC đứng yên ở vùng 87 - 89 triệu một lượng, do Ngân hàng Nhà nước chưa thay đổi giá bán can thiệp. Thời điểm này là mùa cưới cuối năm và nhu cầu mua vàng nhẫn làm quà cưới tăng, song người dân không dễ để mua được mặt hàng này tại các thương hiệu lớn. Các thương hiệu lớn như DOJI, PNJ, Bảo Tín Minh Châu thường xuyên trong tình trạng cháy hàng. Khách lẻ chỉ may mắn mua được số lượng ít nếu cửa hàng vừa có khách bán ra. Còn tại SJC, các chi nhánh giới hạn lượng mua tối đa 5 phân đến 1 chỉ mỗi người. Trên thị trường quốc tế, mỗi ounce vàng trong 5 ngày qua tăng mạnh hơn 100 USD. Kim loại quý có thời điểm lên mức kỷ lục gần 2.750 USD, trước khi lùi về vùng 2.738 USD vào sáng nay. Quy đổi theo tỷ giá bán Vietcombank, giá vàng trong nước chênh lệch 3,5-5 triệu đồng một lượng so với thế giới. Theo dự báo của các nhà băng hàng đầu thế giới, giá vàng thế giới có thể lên 3.000 USD một ounce vào năm sau. Các chuyên gia khuyến nghị nhà đầu tư phân bổ tỷ trọng nhỏ danh mục vào kênh trú ẩn này, đặc biệt trong bối cảnh kim loại quý đã tăng mạnh thời gian qua."
Đoạn 3: "Nhu cầu trú ẩn khi căng thẳng địa chính trị leo thang kéo giá vàng lên mức đỉnh mới, tại 2.748 USD một ounce. Chốt phiên giao dịch 22/10, giá vàng thế giới giao ngay tăng gần 30 USD lên 2.748 USD một ounce. Đây là mức cao kỷ lục mới của kim loại quý. "Căng thẳng địa chính trị vẫn là nguyên nhân chủ yếu. Hai tuần nữa sẽ diễn ra bầu cử Tổng thống Mỹ và cuộc đua vẫn rất sát sao. Bất ổn chính trị đang kéo nhu cầu trú ẩn lên cao", Peter A. Grant - Phó giám đốc Zaner Metals nhận định trên Reuters. Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Cuộc thăm dò mới nhất của Reuters/Ipsos cho thấy tỷ lệ ủng hộ Phó tổng thống Kamala Harris hiện là 46%, nhỉnh hơn so với 43% của cựu Tổng thống Donald Trump. "Sự sát sao này đang tạo nên tình trạng thiếu chắc chắn. Môi trường này có lợi cho vàng", các nhà phân tích tại ngân hàng BNP Paribas nhận định. Grant dự báo nếu căng thẳng tại Trung Đông tiếp tục tăng nhiệt, giá có thể lên 3.000 USD cuối năm nay. Từ đầu năm, giá đã tăng 33% và liên tiếp lập đỉnh mới. Một yếu tố khác đang hỗ trợ kim loại quý là làn sóng giảm lãi suất của các ngân hàng trung ương lớn trên toàn cầu. Mỹ, châu Âu, Trung Quốc cùng hàng loạt nền kinh tế khác đã giảm lãi suất năm nay để hỗ trợ nền kinh tế. Trong khi đó, tại Wall Street, các chỉ số chính gần như đứng yên. Nhà đầu tư hiện theo dõi lợi suất trái phiếu chính phủ Mỹ và chờ đánh giá thêm báo cáo tài chính của các doanh nghiệp. Ngoài vàng, các kim loại quý khác cũng tăng giá. Bạc lập đỉnh 12 năm, khi tăng 3,2% lên gần 35 USD một ounce. Han Tan - chiến lược gia thị trường tại Exinity Group dự báo bạc vượt mốc 35 USD trước khi cuộc bầu cử diễn ra. Bạch kim đắt thêm 2,8% lên 1.031 USD một ounce. Palladium tăng 2,9% lên 1.081 USD."
'''},
{"role": "user", "content": '''giá nhẫn trơn hôm nay là bao nhiêu?'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# Giá nhẫn trơn hôm nay là 86,9 - 88,2 triệu đồng.
```
<h5> Answer with bot persona</h5>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.
Nếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
Đoạn 0: "Chính phủ đề xuất bổ sung gần 20.700 tỷ đồng vốn điều lệ cho Ngân hàng Ngoại thương Việt Nam (Vietcombank) từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Chiều 23/10, thừa ủy quyền Chính phủ, Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc trình Quốc hội về bổ sung vốn Nhà nước tại Ngân hàng Ngoại Thương Việt Nam (Vietcombank). Theo đó, Chính phủ đề nghị tăng vốn điều lệ cho ngân hàng này gần 20.700 tỷ đồng từ cổ tức bằng cổ phiếu được chia của cổ đông Nhà nước. Số tiền này lấy từ nguồn lợi nhuận còn lại lũy kế đến hết năm 2018 và lãi còn lại năm 2021. Vốn điều lệ dự kiến rót thêm cho Vietcombank gần bằng lợi nhuận hợp nhất trước thuế nửa đầu năm nay của nhà băng này. Việc bổ sung vốn cho "ông lớn" ngân hàng quốc doanh được Phó thủ tướng nhấn mạnh là cấp thiết để duy trì tỷ lệ vốn góp Nhà nước, phù hợp chiến lược phát triển kinh tế xã hội, tạo nguồn lực hỗ trợ ngân hàng yếu kém. Phó thủ tướng cho biết, phần lợi nhuận còn lại lũy kế hết năm 2018 và lãi còn lại 2021 hiện được hạch toán theo dõi tại VCB, chưa nằm trong cân đối ngân sách Nhà nước. Do vậy, nguồn vốn đề xuất tăng cho ngân hàng này không ảnh hưởng tới kế hoạch dự toán thu chi ngân sách 2024-2025. Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Phó thủ tướng, Bộ trưởng Tài chính Hồ Đức Phớc đọc tờ trình bổ sung vốn cho Vietcombank, ngày 23/10. Ảnh: Trung tâm báo chí Quốc hội Vốn điều lệ của Vietcombank hiện là 55.891 tỷ đồng, thấp hơn nhiều so với VPBank (79.339 tỷ đồng), Techcombank (70.450 tỷ đồng) và không có sự cách biệt lớn so với một số ngân hàng thương mại cổ phần như MB (52.871) tỷ đồng, ACB (44.667 tỷ đồng) và SHB (36.629 tỷ đồng). Ngoài ra, việc tăng vốn nhằm để ngân hàng này đáp ứng các tỷ lệ an toàn tối thiểu. Tính tới cuối 2023, tỷ lệ an toàn vốn (CAR) của ngân hàng này là 11,05%, đảm bảo quy định. Tuy nhiên, mức này thấp hơn các ngân hàng thương mại cổ phần (VPBank, MB là 12-13%; Techcombank 13-15%...) và các nhà băng trong khu vực (Singapore là 17,1%, Indonesia 23,27%...). Thẩm tra nội dung này, Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh cho rằng đề xuất tăng vốn cho Vietcombank bảo đảm cơ sở pháp lý và đúng thẩm quyền theo quy định. Tuy nhiên, Ủy ban Kinh tế đề nghị Chính phủ lấy ý kiến của cổ đông chiến lược nước ngoài Ngân hàng Mizuho Corporate Bank - đơn vị nắm 15% vốn điều lệ của Vietcombank. Việc này nhằm thuận lợi trong quá trình tăng vốn. Chính phủ cũng cần bổ sung thông tin hiện trạng vốn của Vietcombank so với các ngân hàng thương mại trong hệ thống hiện nay. "Có ý kiến đề nghị làm rõ nhận định nguồn vốn đề xuất để tăng vốn điều lệ không tác động đến ngân sách Nhà nước", ông Thanh cho biết. Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Trụ sở Ngân hàng Ngoại thương Việt Nam (Vietcombank). Ảnh: VCB Chủ nhiệm Ủy ban Kinh tế Vũ Hồng Thanh đề nghị Chính phủ chỉ đạo Ngân hàng Nhà nước cùng các bộ, ngành liên quan xử lý phần lợi nhuận còn lại năm 2022, 2023 (lần lượt là 21.680 tỷ và 25.009 tỷ đồng), nhằm tăng năng lực tài chính cho Vietcombank, bù đắp mức thiếu hụt vốn tự có, bảo đảm an toàn hoạt động. Cơ quan thẩm tra lưu ý vốn được bổ sung cho Vietcombank cần được dùng để mở rộng kinh doanh, cung ứng tín dụng với các lĩnh vực, dự án quan trọng quốc gia quy mô lớn, giảm lãi suất cho vay, cũng như đổi mới mô hình quản trị, chất lượng dịch vụ của nhà băng này. 
"Chính phủ cần đánh giá kỹ tác động việc bổ sung vốn Nhà nước cho Vietcombank tới phát triển của ngành ngân hàng, hiệu quả kinh tế xã hội", Ủy ban Kinh tế lưu ý. Vietcombank là một trong 4 ngân hàng thương mại Nhà nước, bên cạnh BIDV, VietinBank và Agribank. Ngân hàng này do Nhà nước sở hữu 74,8% vốn điều lệ. Lũy kế nửa đầu năm nay, lợi nhuận hợp nhất trước thuế của nhà băng này đạt 20.835 tỷ đồng, tăng 1,6% so với cùng kỳ 2023. Với dữ liệu này, Vietcombank tiếp tục đứng đầu toàn hệ thống ngân hàng về lợi nhuận 6 tháng đầu năm. Đây cũng là mức lãi nửa đầu năm cao kỷ lục của nhà băng này. Tính đến 30/6, tổng tài sản của ngân hàng đạt hơn 1,9 triệu tỷ đồng, tăng 3,6% so với cuối 2023. Trong đó, cho vay khách hàng gần 1,37 triệu tỷ đồng, tăng 7,8%."
Đoạn 1: "Đã có vài đơn vị bán tín chỉ carbon cho khách ngoại nhưng còn thiếu cơ sở pháp lý để đảm bảo hoạt động được thuận lợi, theo chuyên gia. Thông tin tại phiên tọa đàm thuộc Diễn đàn và Triển lãm Kinh tế xanh 2024 (GEFE), ông Đỗ Ngọc Quỳnh, Tổng thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA), cho biết thị trường tín chỉ carbon tự nguyện Việt Nam đã có một số đơn vị bán được tín chỉ carbon cho nhà đầu tư, tập đoàn nước ngoài. "Họ đang mua chứng chỉ carbon và chứng chỉ năng lượng tái tạo (REC) trong tiêu chí RE100, tức 100% năng lượng tái tạo", ông cho biết. RE100 là sáng kiến toàn cầu dành cho các công ty cam kết sử dụng 100% điện năng tái tạo, phát động bởi Climate Group và CDP vào 2014. Từ trái sang, Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) nói tại tọa đàm. Ảnh: GEFE 2024 Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS Hà Nội) và ông Đỗ Ngọc Quỳnh, Tổng Thư ký Hiệp hội Thị trường Trái phiếu Việt Nam (VBMA) chia sẻ tại tọa đàm. Ảnh: GEFE 2024 Thị trường carbon gồm hai hình thức là bắt buộc và tự nguyện. Đồ họa: Dỹ Tùng Phân biệt các loại thị trường carbon. Đồ họa: Dỹ Tùng Theo kế hoạch của chính phủ, thị trường bắt buộc sẽ vận hành thử nghiệm vào giai đoạn 2025-2028. Với thị trường tự nguyện, ông Quỳnh cho biết đã bắt đầu hình thành và cũng biến động theo diễn biến xu hướng chung toàn cầu. Chuyên gia VBMA cho rằng Việt Nam đã có chính sách chung để thực hiện cam kết Net Zero vào 2050, nhưng vẫn chưa có pháp lý đầy đủ và rõ ràng cho thị trường carbon tự nguyện. "Những người bán tại Việt Nam sau giao dịch không biết hạch toán vào đâu, nộp thuế thế nào. Một số chọn phương án tính vào thu nhập bất thường để khai thuế", ông ví dụ. Ông Nguyễn Thành Nghiệp, Luật sư thành viên công ty luật VTN và Cộng sự chỉ ra việc chưa có quy định xác định tính chất tài sản của tín chỉ carbon. "Chúng có được xem là tài sản bình thường, được thế chấp hay giao dịch thế nào chưa có đủ căn cứ pháp lý", ông nói. Ngoài ra, quy trình MRV (đo lường, báo cáo và kiểm chứng) cũng cần quy định, hướng dẫn rõ. Theo ông, ngoài các cơ quan quản lý, khu vực tư nhân cũng trông chờ xem liệu có thể tham gia hoạt động MRV không. "Trong thời gian tới, nếu hoàn thiện pháp lý, thị trường sẽ có nhiều tiềm năng phát triển hơn", ông Đỗ Ngọc Quỳnh dự báo. Ngoài tín chỉ carbon, với tiềm năng điện tái tạo thứ tư thế giới theo McKenzie, ông cho rằng có thể khai thác việc vừa bán tín chỉ carbon vừa bán được REC. Theo VBMA, quy mô thị trường carbon bắt buộc toàn cầu đạt 104 tỷ USD năm ngoái, tăng 100% so với năm 2020. Trong khi, thị trường tự nguyện đã thu hẹp còn 800 triệu USD, giảm hai phần ba so với 2021 do một số vụ bê bối liên quan đến "giặt xanh" (green washing) làm ảnh hưởng đến uy tín, niềm tin. Theo dõi biến động của thị trường thế giới giúp các bên tham gia trong thị trường carbon tự nguyện còn sơ khai của Việt Nam rút kinh nghiệm và tìm ra hướng đi. Marco Gaspari, Điều phối viên Ngành Môi trường tại Cơ quan Hợp tác Phát triển Italy (AICS) văn phòng Hà Nội, dự báo người mua sẽ cần tìm kiếm các bên bán tín chỉ có hệ thống quản trị tốt và rõ ràng. Ông cho rằng người mua đang thiên về chuộng mua tín chỉ lĩnh vực giảm phát thải sản xuất vì dễ chứng minh. Một loại được quan tâm khác là "carbon xanh dương" (blue carbon) - tín chỉ tạo ra từ các dự án hấp thụ carbon của rừng ngập mặn, đầm lầy bãi triều và cỏ biển. 
Ông chỉ ra Việt Nam triển vọng với 200.000 ha rừng ngập mặn, có thể làm các dự án carbon tương tự như ở Honduras. Bà Thu Nguyễn, Quản lý chính sách tại Apanada Management Consultancy, Đại diện Viện Tài nguyên Thế giới (WRI) khuyến nghị các dự án tín chỉ carbon nâng cao giá trị bằng cách quan tâm đến tính bình đẳng và bao trùm. Theo đó, mục tiêu không chỉ là giảm phát thải mà còn là cải thiện đời sống người dân và phát triển bình đẳng hơn "Dự án cần bảo đảm có tham vấn của cộng đồng, đặc biệt là phụ nữ và các nhóm yếu thế, để tạo ra lợi ích cho cả cộng đồng lẫn nhà đầu tư", bà nói."
Đoạn 2: "Giá nhẫn trơn liên tục điều chỉnh, tăng gần một triệu đồng trong ngày và có nơi lên sát 89 triệu đồng một lượng. 15h ngày 23/10, giá mua bán nhẫn trơn được các thương hiệu kinh doanh điều chỉnh theo diễn biến đi lên của thế giới. Chiều nay, mỗi ounce vàng quốc tế tiếp tục thiết lập kỷ lục mới 2.755 USD. Giá nhẫn trơn tại Công ty Vàng bạc đá quý Sài Gòn (SJC) cũng tăng nửa triệu đồng so với đầu sáng và gần 1 triệu đồng so với cuối ngày hôm qua, lên 86,9 - 88,2 triệu đồng. Công ty Vàng bạc đá quý Phú Nhuận (PNJ) và Mi Hồng niêm yết giá nhẫn trơn quanh vùng 87,4 - 88,4 triệu đồng. Còn tại Tập đoàn Vàng bạc đá quý DOJI, giá mua bán nhẫn trơn cùng thời điểm thậm chí lên 88 - 88,9 triệu đồng một lượng. Trước đó đầu ngày, Công ty Vàng bạc đá quý Sài Gòn (SJC) đã tăng 300.000 đồng một lượng so với cuối ngày hôm qua, niêm yết giá nhẫn trơn tại 86,3 - 87,6 triệu đồng. Biểu giá mua bán nhẫn trơn tại Tập đoàn Vàng bạc đá quý DOJI lúc 9h sáng là 87 - 88 triệu đồng, tăng 200.000 đồng so với cuối ngày hôm qua. Nhẫn trơn giữ nhịp tăng liên tục trong 10 ngày qua. So với giữa tháng, mỗi lượng nhẫn trơn đã tăng hơn 5 triệu đồng. Còn so với đầu năm, nhẫn trơn tăng gần 25 triệu một lượng, tương đương hiệu suất 39%. Trong khi giá vàng miếng SJC đứng yên ở vùng 87 - 89 triệu một lượng, do Ngân hàng Nhà nước chưa thay đổi giá bán can thiệp. Thời điểm này là mùa cưới cuối năm và nhu cầu mua vàng nhẫn làm quà cưới tăng, song người dân không dễ để mua được mặt hàng này tại các thương hiệu lớn. Các thương hiệu lớn như DOJI, PNJ, Bảo Tín Minh Châu thường xuyên trong tình trạng cháy hàng. Khách lẻ chỉ may mắn mua được số lượng ít nếu cửa hàng vừa có khách bán ra. Còn tại SJC, các chi nhánh giới hạn lượng mua tối đa 5 phân đến 1 chỉ mỗi người. Trên thị trường quốc tế, mỗi ounce vàng trong 5 ngày qua tăng mạnh hơn 100 USD. Kim loại quý có thời điểm lên mức kỷ lục gần 2.750 USD, trước khi lùi về vùng 2.738 USD vào sáng nay. Quy đổi theo tỷ giá bán Vietcombank, giá vàng trong nước chênh lệch 3,5-5 triệu đồng một lượng so với thế giới. Theo dự báo của các nhà băng hàng đầu thế giới, giá vàng thế giới có thể lên 3.000 USD một ounce vào năm sau. Các chuyên gia khuyến nghị nhà đầu tư phân bổ tỷ trọng nhỏ danh mục vào kênh trú ẩn này, đặc biệt trong bối cảnh kim loại quý đã tăng mạnh thời gian qua."
Đoạn 3: "Nhu cầu trú ẩn khi căng thẳng địa chính trị leo thang kéo giá vàng lên mức đỉnh mới, tại 2.748 USD một ounce. Chốt phiên giao dịch 22/10, giá vàng thế giới giao ngay tăng gần 30 USD lên 2.748 USD một ounce. Đây là mức cao kỷ lục mới của kim loại quý. "Căng thẳng địa chính trị vẫn là nguyên nhân chủ yếu. Hai tuần nữa sẽ diễn ra bầu cử Tổng thống Mỹ và cuộc đua vẫn rất sát sao. Bất ổn chính trị đang kéo nhu cầu trú ẩn lên cao", Peter A. Grant - Phó giám đốc Zaner Metals nhận định trên Reuters. Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Giá vàng thế giới đảo chiều tăng mạnh trong phiên 22/10. Đồ thị: Kitco Cuộc thăm dò mới nhất của Reuters/Ipsos cho thấy tỷ lệ ủng hộ Phó tổng thống Kamala Harris hiện là 46%, nhỉnh hơn so với 43% của cựu Tổng thống Donald Trump. "Sự sát sao này đang tạo nên tình trạng thiếu chắc chắn. Môi trường này có lợi cho vàng", các nhà phân tích tại ngân hàng BNP Paribas nhận định. Grant dự báo nếu căng thẳng tại Trung Đông tiếp tục tăng nhiệt, giá có thể lên 3.000 USD cuối năm nay. Từ đầu năm, giá đã tăng 33% và liên tiếp lập đỉnh mới. Một yếu tố khác đang hỗ trợ kim loại quý là làn sóng giảm lãi suất của các ngân hàng trung ương lớn trên toàn cầu. Mỹ, châu Âu, Trung Quốc cùng hàng loạt nền kinh tế khác đã giảm lãi suất năm nay để hỗ trợ nền kinh tế. Trong khi đó, tại Wall Street, các chỉ số chính gần như đứng yên. Nhà đầu tư hiện theo dõi lợi suất trái phiếu chính phủ Mỹ và chờ đánh giá thêm báo cáo tài chính của các doanh nghiệp. Ngoài vàng, các kim loại quý khác cũng tăng giá. Bạc lập đỉnh 12 năm, khi tăng 3,2% lên gần 35 USD một ounce. Han Tan - chiến lược gia thị trường tại Exinity Group dự báo bạc vượt mốc 35 USD trước khi cuộc bầu cử diễn ra. Bạch kim đắt thêm 2,8% lên 1.031 USD một ounce. Palladium tăng 2,9% lên 1.081 USD."
'''},
{"role": "user", "content": '''Hãy trả lời câu hỏi sau dựa vào đoạn ngữ cảnh được cung cấp. Câu trả lời phải có thưa gửi rõ ràng, xưng là em và kính thưa quý khách.\nCâu hỏi: giá nhẫn trơn hôm nay là bao nhiêu?'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
# Em xin thông báo rằng giá nhẫn trơn hôm nay dao động từ 86,9 đến 88,2 triệu đồng một ounce, tùy thuộc vào từng thương hiệu.
```
***You can customize the prompt that precedes the answer to get a response that suits your needs.***
***You can also add information about the bot's persona in the system prompt, for example:***
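A minimal sketch of a persona-enriched system prompt, reusing the same `tokenizer` and `model` objects as in the examples above. The persona text and the bank name below are illustrative assumptions, not part of the original card:
```python
# Sketch only: the persona ("virtual assistant Vy of ABC Bank") is an illustrative assumption.
messages = [
    {"role": "system", "content": "Bạn là trợ lý ảo tên Vy của ngân hàng ABC. Hãy luôn xưng là em, thưa gửi lịch sự và trả lời ngắn gọn."},
    {"role": "user", "content": "Ngân hàng làm việc đến mấy giờ?"}
]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```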
<h4> 3. Function Calling task </h4>
***In this task, we are following the Function Calling template from Glaive AI: [glaiveai/glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).***
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lý hữu ích với khả năng truy cập vào các hàm sau. Hãy sử dụng chúng nếu cần -
{
"name": "weather_forecast",
"description": "Cung cấp cập nhật và dự báo thời tiết cho các địa điểm cụ thể, bao gồm nhiệt độ, độ ẩm và tình trạng thời tiết. Ví dụ: thời tiết hôm nay, dự báo thời tiết ở Hà Nội, nhiệt độ tại Đà Nẵng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "news_update",
"description": "Cung cấp các bài báo và cập nhật tin tức mới nhất trên nhiều lĩnh vực như chính trị, công nghệ, thể thao và giải trí. Ví dụ: tin tức hôm nay, cập nhật thể thao, tin công nghệ mới nhất, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "recipe_search",
"description": "Tìm kiếm và gợi ý công thức nấu ăn dựa trên nguyên liệu hoặc sở thích dinh dưỡng. Ví dụ: công thức món ăn với gà, món chay, ăn kiêng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "movie_recommendation",
"description": "Cung cấp gợi ý phim dựa trên thể loại, tâm trạng hoặc tiêu đề cụ thể. Ví dụ: phim hài hay, phim hành động mới, gợi ý phim cho tối nay, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "fitness_advice",
"description": "Cung cấp mẹo và bài tập cho sức khỏe và thể dục dựa trên mục tiêu của người dùng. Ví dụ: bài tập giảm cân, lịch tập gym cho người mới, lời khuyên về dinh dưỡng, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
},
{
"name": "travel_planner",
"description": "Hỗ trợ lập kế hoạch du lịch, bao gồm gợi ý lịch trình, mẹo đặt vé và thông tin về điểm đến. Ví dụ: lên kế hoạch du lịch Đà Nẵng, địa điểm tham quan ở Nha Trang, mẹo du lịch Nhật Bản, v.v.",
"parameters": {
"properties": {
"__arg1": {
"description": "__arg1",
"type": "string"
}
},
"required": [
"__arg1"
],
"type": "object"
}
}'''},
{"role": "user", "content": "hôm nay có tin tức gì về ô nhiễm không khí"}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=128)
print(tokenizer.decode(outputs[0]))
# <functioncall> {"name": "news_update", "arguments": {"__arg1": "ô nhiễm không khí"}}
```
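The model answers a tool-worthy request with a `<functioncall>` tag followed by a JSON payload, as in the output above. A small helper like the sketch below can turn that raw string into a function name and an arguments dict; the helper itself is an illustrative assumption, not part of the Glaive template or the original card:
```python
import json
import re

def parse_function_call(generated_text):
    """Pull the function name and arguments out of a '<functioncall> {...}' response.
    Illustrative helper, assumed for this sketch."""
    match = re.search(r"<functioncall>\s*(\{.*\})", generated_text, re.DOTALL)
    if match is None:
        return None  # the model answered directly instead of calling a tool
    call = json.loads(match.group(1))
    return call["name"], call["arguments"]

# Using the output shown above:
print(parse_function_call('<functioncall> {"name": "news_update", "arguments": {"__arg1": "ô nhiễm không khí"}}'))
# ('news_update', {'__arg1': 'ô nhiễm không khí'})
```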
<h4> 4. Question Rewriting task </h4>
```python
messages = [
{"role": "system", "content": '''Bạn là một trợ lí Tiếng Việt nhiệt tình và trung thực. Hãy luôn trả lời một cách hữu ích nhất có thể, đồng thời giữ an toàn.\nNếu một câu hỏi không có ý nghĩa hoặc không hợp lý về mặt thông tin, hãy giải thích tại sao thay vì trả lời một điều gì đó không chính xác, vui lòng không chia sẻ thông tin sai lệch.
Context:
USER: Concert "Anh trai vượt ngàn chông gai" diễn ra ở đâu?
ASSISTANT: Đêm concert Anh trai vượt ngàn chông gai diễn ra vào tối 19-10, tại công viên bờ sông Sài Gòn, TP.HCM với sự tham dự của khoảng 20.000 khán giả.
USER: Buổi biểu diễn có bao nhiêu nghệ sĩ?
ASSISTANT: Danh sách nghệ sĩ gồm 17 nghệ sĩ: Kay Trần, Jun Phạm, Cường Seven, Tự Long, (S)TRONG Trọng Hiếu, S.T Sơn Thạch, BB Trần, Bùi Công Nam, Rhymastic, Soobin Hoàng Sơn, Binz, Quốc Thiên, Thanh Duy, Bằng Kiều, Tiến Luật, Đinh Tiến Đạt và Đỗ Hoàng Hiệp.
USER: Chương trình phát sóng khi nào và ở đâu?
ASSISTANT: chương trình phát sóng từ ngày 29 tháng 6 năm 2024 lúc 20:00 thứ 7 hàng tuần trên VTV3 và công chiếu lúc 20:30 cùng ngày trên kênh YouTube YeaH1 Show của nhà sản xuất chương trình.'''},
{"role": "user", "content": '''Dựa vào đoạn hội thoại được cung cấp, viết lại câu nói của người dùng sao cho đầu đủ ý nhất có thể mà không bị sai lệch thông tin.
Câu nói: Concert này có tổ chức ở Hà Nội không?
'''}]
tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(tokenized_chat, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
# Buổi hòa nhạc Anh trai vượt ngàn chông gai có diễn ra ở Hà Nội không?
```
***Modify the generation parameters "temperature", "top_k", and "top_p" to suit your use case.***
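As a minimal sketch of that tuning (the specific values below are illustrative assumptions, not recommended defaults from the original card), these sampling parameters are passed directly to `model.generate`:
```python
# Enable sampling so that temperature/top_k/top_p take effect; values are illustrative.
outputs = model.generate(
    tokenized_chat,
    max_new_tokens=512,
    do_sample=True,    # required for the sampling parameters below to apply
    temperature=0.7,   # lower = more deterministic, higher = more diverse
    top_k=50,          # keep only the 50 most likely next tokens
    top_p=0.9,         # nucleus sampling: keep tokens covering 90% of probability mass
)
print(tokenizer.decode(outputs[0]))
```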
Corresponding Author:
+ [email protected]
| [
"SUMMARIZATION"
] | [
"CHIA"
] | Non_BioNLP |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-99421248 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-99421248",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,716 | 1,716 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-99421248
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the fine-tuned embedding model from the Hugging Face Hub.
model = SentenceTransformer(
    'fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-99421248',
    trust_remote_code=True
)

# Encode two texts into dense vectors and compare them with cosine similarity.
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
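The same embeddings can also be reused for a simple semantic-search ranking. The snippet below is an illustrative sketch (the corpus sentences and query are made up for demonstration) built only on the `model` object and the `cos_sim` helper imported above:
```python
# Rank a small corpus against a query by cosine similarity (illustrative data).
corpus = [
    "Vitamin D deficiency is associated with bone disorders.",
    "The study evaluates transformer models on scientific claims.",
    "Regular exercise improves cardiovascular health.",
]
query = "Which sentence is about scientific claim verification?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

scores = cos_sim(query_embedding, corpus_embeddings)[0]
best = scores.argmax().item()
print(corpus[best], float(scores[best]))
```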
| [
"TEXT_CLASSIFICATION"
] | [
"SCIFACT"
] | Non_BioNLP |
McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse | McGill-NLP | sentence-similarity | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | 1,712 | 1,712 | 338 | 7 | ---
language:
- en
library_name: peft
license: mit
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Mistral-7B-unsupervised
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.94029850746269
- type: ap
value: 41.01055096636703
- type: f1
value: 71.2582580801963
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 85.288275
- type: ap
value: 80.9174293931393
- type: f1
value: 85.26284279319103
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.089999999999996
- type: f1
value: 46.42571856588491
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.676
- type: map_at_10
value: 41.705999999999996
- type: map_at_100
value: 42.649
- type: map_at_1000
value: 42.655
- type: map_at_3
value: 36.214
- type: map_at_5
value: 39.475
- type: mrr_at_1
value: 26.173999999999996
- type: mrr_at_10
value: 41.873
- type: mrr_at_100
value: 42.817
- type: mrr_at_1000
value: 42.823
- type: mrr_at_3
value: 36.427
- type: mrr_at_5
value: 39.646
- type: ndcg_at_1
value: 25.676
- type: ndcg_at_10
value: 51.001
- type: ndcg_at_100
value: 55.001
- type: ndcg_at_1000
value: 55.167
- type: ndcg_at_3
value: 39.713
- type: ndcg_at_5
value: 45.596
- type: precision_at_1
value: 25.676
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.983
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.619
- type: precision_at_5
value: 12.831000000000001
- type: recall_at_1
value: 25.676
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 98.29299999999999
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 49.858000000000004
- type: recall_at_5
value: 64.154
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.557333278165295
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.921940994207674
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.602773795071585
- type: mrr
value: 72.93749725190169
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 83.29045204631967
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.1590909090909
- type: f1
value: 86.08993054539444
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.13784714320738
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.26284987791574
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: cqadupstack/android
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.611
- type: map_at_10
value: 37.838
- type: map_at_100
value: 39.446999999999996
- type: map_at_1000
value: 39.583
- type: map_at_3
value: 34.563
- type: map_at_5
value: 36.332
- type: mrr_at_1
value: 35.765
- type: mrr_at_10
value: 44.614
- type: mrr_at_100
value: 45.501000000000005
- type: mrr_at_1000
value: 45.558
- type: mrr_at_3
value: 42.513
- type: mrr_at_5
value: 43.515
- type: ndcg_at_1
value: 35.765
- type: ndcg_at_10
value: 44.104
- type: ndcg_at_100
value: 50.05500000000001
- type: ndcg_at_1000
value: 52.190000000000005
- type: ndcg_at_3
value: 39.834
- type: ndcg_at_5
value: 41.491
- type: precision_at_1
value: 35.765
- type: precision_at_10
value: 8.870000000000001
- type: precision_at_100
value: 1.505
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 19.886
- type: precision_at_5
value: 14.277999999999999
- type: recall_at_1
value: 27.611
- type: recall_at_10
value: 55.065
- type: recall_at_100
value: 80.60199999999999
- type: recall_at_1000
value: 94.517
- type: recall_at_3
value: 41.281
- type: recall_at_5
value: 46.791
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: cqadupstack/english
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.599999999999998
- type: map_at_10
value: 38.218999999999994
- type: map_at_100
value: 39.336
- type: map_at_1000
value: 39.464
- type: map_at_3
value: 35.599
- type: map_at_5
value: 36.927
- type: mrr_at_1
value: 37.197
- type: mrr_at_10
value: 44.759
- type: mrr_at_100
value: 45.372
- type: mrr_at_1000
value: 45.422000000000004
- type: mrr_at_3
value: 42.941
- type: mrr_at_5
value: 43.906
- type: ndcg_at_1
value: 37.197
- type: ndcg_at_10
value: 43.689
- type: ndcg_at_100
value: 47.588
- type: ndcg_at_1000
value: 49.868
- type: ndcg_at_3
value: 40.434
- type: ndcg_at_5
value: 41.617
- type: precision_at_1
value: 37.197
- type: precision_at_10
value: 8.529
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.181
- type: precision_at_3
value: 20.212
- type: precision_at_5
value: 13.987
- type: recall_at_1
value: 28.599999999999998
- type: recall_at_10
value: 52.266999999999996
- type: recall_at_100
value: 69.304
- type: recall_at_1000
value: 84.249
- type: recall_at_3
value: 41.281
- type: recall_at_5
value: 45.56
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: cqadupstack/gaming
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.168
- type: map_at_10
value: 44.690999999999995
- type: map_at_100
value: 45.804
- type: map_at_1000
value: 45.876
- type: map_at_3
value: 41.385
- type: map_at_5
value: 43.375
- type: mrr_at_1
value: 38.997
- type: mrr_at_10
value: 48.782
- type: mrr_at_100
value: 49.534
- type: mrr_at_1000
value: 49.57
- type: mrr_at_3
value: 46.134
- type: mrr_at_5
value: 47.814
- type: ndcg_at_1
value: 38.997
- type: ndcg_at_10
value: 50.707
- type: ndcg_at_100
value: 55.358
- type: ndcg_at_1000
value: 56.818999999999996
- type: ndcg_at_3
value: 45.098
- type: ndcg_at_5
value: 48.065999999999995
- type: precision_at_1
value: 38.997
- type: precision_at_10
value: 8.414000000000001
- type: precision_at_100
value: 1.159
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 20.564
- type: precision_at_5
value: 14.445
- type: recall_at_1
value: 33.168
- type: recall_at_10
value: 64.595
- type: recall_at_100
value: 85.167
- type: recall_at_1000
value: 95.485
- type: recall_at_3
value: 49.555
- type: recall_at_5
value: 56.871
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: cqadupstack/gis
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.254
- type: map_at_10
value: 23.925
- type: map_at_100
value: 25.008000000000003
- type: map_at_1000
value: 25.123
- type: map_at_3
value: 21.676000000000002
- type: map_at_5
value: 23.042
- type: mrr_at_1
value: 18.756999999999998
- type: mrr_at_10
value: 25.578
- type: mrr_at_100
value: 26.594
- type: mrr_at_1000
value: 26.680999999999997
- type: mrr_at_3
value: 23.371
- type: mrr_at_5
value: 24.721
- type: ndcg_at_1
value: 18.756999999999998
- type: ndcg_at_10
value: 27.878999999999998
- type: ndcg_at_100
value: 33.285
- type: ndcg_at_1000
value: 36.333
- type: ndcg_at_3
value: 23.461000000000002
- type: ndcg_at_5
value: 25.836
- type: precision_at_1
value: 18.756999999999998
- type: precision_at_10
value: 4.429
- type: precision_at_100
value: 0.754
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 9.981
- type: precision_at_5
value: 7.412000000000001
- type: recall_at_1
value: 17.254
- type: recall_at_10
value: 38.42
- type: recall_at_100
value: 63.50900000000001
- type: recall_at_1000
value: 86.787
- type: recall_at_3
value: 26.840999999999998
- type: recall_at_5
value: 32.4
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: cqadupstack/mathematica
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.495000000000001
- type: map_at_10
value: 16.505
- type: map_at_100
value: 17.59
- type: map_at_1000
value: 17.709
- type: map_at_3
value: 13.974
- type: map_at_5
value: 15.466
- type: mrr_at_1
value: 14.179
- type: mrr_at_10
value: 20.396
- type: mrr_at_100
value: 21.384
- type: mrr_at_1000
value: 21.47
- type: mrr_at_3
value: 17.828
- type: mrr_at_5
value: 19.387999999999998
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.852
- type: ndcg_at_100
value: 26.44
- type: ndcg_at_1000
value: 29.448999999999998
- type: ndcg_at_3
value: 16.181
- type: ndcg_at_5
value: 18.594
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 4.229
- type: precision_at_100
value: 0.8170000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 8.126
- type: precision_at_5
value: 6.493
- type: recall_at_1
value: 10.495000000000001
- type: recall_at_10
value: 30.531000000000002
- type: recall_at_100
value: 55.535999999999994
- type: recall_at_1000
value: 77.095
- type: recall_at_3
value: 17.805
- type: recall_at_5
value: 24.041
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: cqadupstack/physics
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.826999999999998
- type: map_at_10
value: 34.957
- type: map_at_100
value: 36.314
- type: map_at_1000
value: 36.437999999999995
- type: map_at_3
value: 31.328
- type: map_at_5
value: 33.254
- type: mrr_at_1
value: 31.375999999999998
- type: mrr_at_10
value: 40.493
- type: mrr_at_100
value: 41.410000000000004
- type: mrr_at_1000
value: 41.46
- type: mrr_at_3
value: 37.504
- type: mrr_at_5
value: 39.212
- type: ndcg_at_1
value: 31.375999999999998
- type: ndcg_at_10
value: 41.285
- type: ndcg_at_100
value: 46.996
- type: ndcg_at_1000
value: 49.207
- type: ndcg_at_3
value: 35.297
- type: ndcg_at_5
value: 37.999
- type: precision_at_1
value: 31.375999999999998
- type: precision_at_10
value: 7.960000000000001
- type: precision_at_100
value: 1.277
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.57
- type: recall_at_1
value: 24.826999999999998
- type: recall_at_10
value: 54.678000000000004
- type: recall_at_100
value: 78.849
- type: recall_at_1000
value: 93.36
- type: recall_at_3
value: 37.775
- type: recall_at_5
value: 44.993
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: cqadupstack/programmers
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.195
- type: map_at_10
value: 29.003
- type: map_at_100
value: 30.379
- type: map_at_1000
value: 30.508000000000003
- type: map_at_3
value: 25.927
- type: map_at_5
value: 27.784
- type: mrr_at_1
value: 26.941
- type: mrr_at_10
value: 34.305
- type: mrr_at_100
value: 35.32
- type: mrr_at_1000
value: 35.386
- type: mrr_at_3
value: 31.735000000000003
- type: mrr_at_5
value: 33.213
- type: ndcg_at_1
value: 26.941
- type: ndcg_at_10
value: 34.31
- type: ndcg_at_100
value: 40.242
- type: ndcg_at_1000
value: 42.9
- type: ndcg_at_3
value: 29.198
- type: ndcg_at_5
value: 31.739
- type: precision_at_1
value: 26.941
- type: precision_at_10
value: 6.507000000000001
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 13.850999999999999
- type: precision_at_5
value: 10.411
- type: recall_at_1
value: 21.195
- type: recall_at_10
value: 45.083
- type: recall_at_100
value: 70.14200000000001
- type: recall_at_1000
value: 88.34100000000001
- type: recall_at_3
value: 31.175000000000004
- type: recall_at_5
value: 37.625
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.293916666666664
- type: map_at_10
value: 28.353666666666665
- type: map_at_100
value: 29.524333333333335
- type: map_at_1000
value: 29.652583333333332
- type: map_at_3
value: 25.727916666666665
- type: map_at_5
value: 27.170833333333334
- type: mrr_at_1
value: 25.21375
- type: mrr_at_10
value: 32.67591666666667
- type: mrr_at_100
value: 33.56233333333334
- type: mrr_at_1000
value: 33.63283333333334
- type: mrr_at_3
value: 30.415333333333333
- type: mrr_at_5
value: 31.679583333333333
- type: ndcg_at_1
value: 25.21375
- type: ndcg_at_10
value: 33.37108333333333
- type: ndcg_at_100
value: 38.57725
- type: ndcg_at_1000
value: 41.246833333333335
- type: ndcg_at_3
value: 28.98183333333334
- type: ndcg_at_5
value: 30.986083333333337
- type: precision_at_1
value: 25.21375
- type: precision_at_10
value: 6.200583333333333
- type: precision_at_100
value: 1.0527499999999999
- type: precision_at_1000
value: 0.14675000000000002
- type: precision_at_3
value: 13.808333333333334
- type: precision_at_5
value: 9.976416666666669
- type: recall_at_1
value: 20.293916666666664
- type: recall_at_10
value: 43.90833333333333
- type: recall_at_100
value: 67.26575
- type: recall_at_1000
value: 86.18591666666666
- type: recall_at_3
value: 31.387416666666667
- type: recall_at_5
value: 36.73883333333333
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: cqadupstack/stats
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.043000000000001
- type: map_at_10
value: 22.203
- type: map_at_100
value: 23.254
- type: map_at_1000
value: 23.362
- type: map_at_3
value: 20.157
- type: map_at_5
value: 21.201999999999998
- type: mrr_at_1
value: 17.485
- type: mrr_at_10
value: 24.729
- type: mrr_at_100
value: 25.715
- type: mrr_at_1000
value: 25.796999999999997
- type: mrr_at_3
value: 22.725
- type: mrr_at_5
value: 23.829
- type: ndcg_at_1
value: 17.485
- type: ndcg_at_10
value: 26.31
- type: ndcg_at_100
value: 31.722
- type: ndcg_at_1000
value: 34.621
- type: ndcg_at_3
value: 22.414
- type: ndcg_at_5
value: 24.125
- type: precision_at_1
value: 17.485
- type: precision_at_10
value: 4.601
- type: precision_at_100
value: 0.7849999999999999
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 10.327
- type: precision_at_5
value: 7.331
- type: recall_at_1
value: 15.043000000000001
- type: recall_at_10
value: 36.361
- type: recall_at_100
value: 61.63999999999999
- type: recall_at_1000
value: 83.443
- type: recall_at_3
value: 25.591
- type: recall_at_5
value: 29.808
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: cqadupstack/tex
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.018
- type: map_at_10
value: 15.886
- type: map_at_100
value: 16.830000000000002
- type: map_at_1000
value: 16.956
- type: map_at_3
value: 14.222000000000001
- type: map_at_5
value: 15.110999999999999
- type: mrr_at_1
value: 14.625
- type: mrr_at_10
value: 19.677
- type: mrr_at_100
value: 20.532
- type: mrr_at_1000
value: 20.622
- type: mrr_at_3
value: 17.992
- type: mrr_at_5
value: 18.909000000000002
- type: ndcg_at_1
value: 14.625
- type: ndcg_at_10
value: 19.414
- type: ndcg_at_100
value: 24.152
- type: ndcg_at_1000
value: 27.433000000000003
- type: ndcg_at_3
value: 16.495
- type: ndcg_at_5
value: 17.742
- type: precision_at_1
value: 14.625
- type: precision_at_10
value: 3.833
- type: precision_at_100
value: 0.744
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 8.213
- type: precision_at_5
value: 6.036
- type: recall_at_1
value: 11.018
- type: recall_at_10
value: 26.346000000000004
- type: recall_at_100
value: 47.99
- type: recall_at_1000
value: 72.002
- type: recall_at_3
value: 17.762
- type: recall_at_5
value: 21.249000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: cqadupstack/unix
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.053
- type: map_at_10
value: 27.950000000000003
- type: map_at_100
value: 29.207
- type: map_at_1000
value: 29.309
- type: map_at_3
value: 25.612000000000002
- type: map_at_5
value: 26.793
- type: mrr_at_1
value: 24.813
- type: mrr_at_10
value: 32.297
- type: mrr_at_100
value: 33.312999999999995
- type: mrr_at_1000
value: 33.379999999999995
- type: mrr_at_3
value: 30.239
- type: mrr_at_5
value: 31.368000000000002
- type: ndcg_at_1
value: 24.813
- type: ndcg_at_10
value: 32.722
- type: ndcg_at_100
value: 38.603
- type: ndcg_at_1000
value: 41.11
- type: ndcg_at_3
value: 28.74
- type: ndcg_at_5
value: 30.341
- type: precision_at_1
value: 24.813
- type: precision_at_10
value: 5.83
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 13.433
- type: precision_at_5
value: 9.384
- type: recall_at_1
value: 20.053
- type: recall_at_10
value: 42.867
- type: recall_at_100
value: 68.90899999999999
- type: recall_at_1000
value: 87.031
- type: recall_at_3
value: 31.606
- type: recall_at_5
value: 35.988
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: cqadupstack/webmasters
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.696
- type: map_at_10
value: 29.741
- type: map_at_100
value: 30.958999999999996
- type: map_at_1000
value: 31.22
- type: map_at_3
value: 26.679000000000002
- type: map_at_5
value: 28.244999999999997
- type: mrr_at_1
value: 27.272999999999996
- type: mrr_at_10
value: 35.101
- type: mrr_at_100
value: 35.91
- type: mrr_at_1000
value: 35.987
- type: mrr_at_3
value: 32.378
- type: mrr_at_5
value: 33.732
- type: ndcg_at_1
value: 27.272999999999996
- type: ndcg_at_10
value: 36.136
- type: ndcg_at_100
value: 40.9
- type: ndcg_at_1000
value: 44.184
- type: ndcg_at_3
value: 31.123
- type: ndcg_at_5
value: 33.182
- type: precision_at_1
value: 27.272999999999996
- type: precision_at_10
value: 7.489999999999999
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 15.348999999999998
- type: precision_at_5
value: 11.344
- type: recall_at_1
value: 20.696
- type: recall_at_10
value: 48.041
- type: recall_at_100
value: 71.316
- type: recall_at_1000
value: 92.794
- type: recall_at_3
value: 32.983000000000004
- type: recall_at_5
value: 38.627
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: cqadupstack/wordpress
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.567000000000002
- type: map_at_10
value: 19.326
- type: map_at_100
value: 20.164
- type: map_at_1000
value: 20.283
- type: map_at_3
value: 17.613
- type: map_at_5
value: 18.519
- type: mrr_at_1
value: 15.157000000000002
- type: mrr_at_10
value: 21.38
- type: mrr_at_100
value: 22.163
- type: mrr_at_1000
value: 22.261
- type: mrr_at_3
value: 19.624
- type: mrr_at_5
value: 20.548
- type: ndcg_at_1
value: 15.157000000000002
- type: ndcg_at_10
value: 23.044999999999998
- type: ndcg_at_100
value: 27.586
- type: ndcg_at_1000
value: 30.848
- type: ndcg_at_3
value: 19.506999999999998
- type: ndcg_at_5
value: 21.101
- type: precision_at_1
value: 15.157000000000002
- type: precision_at_10
value: 3.7150000000000003
- type: precision_at_100
value: 0.651
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 8.626000000000001
- type: precision_at_5
value: 6.026
- type: recall_at_1
value: 13.567000000000002
- type: recall_at_10
value: 32.646
- type: recall_at_100
value: 54.225
- type: recall_at_1000
value: 79.12700000000001
- type: recall_at_3
value: 22.994
- type: recall_at_5
value: 26.912999999999997
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.26
- type: map_at_10
value: 15.109
- type: map_at_100
value: 17.155
- type: map_at_1000
value: 17.354
- type: map_at_3
value: 11.772
- type: map_at_5
value: 13.542000000000002
- type: mrr_at_1
value: 16.678
- type: mrr_at_10
value: 29.470000000000002
- type: mrr_at_100
value: 30.676
- type: mrr_at_1000
value: 30.714999999999996
- type: mrr_at_3
value: 25.44
- type: mrr_at_5
value: 27.792
- type: ndcg_at_1
value: 16.678
- type: ndcg_at_10
value: 22.967000000000002
- type: ndcg_at_100
value: 31.253999999999998
- type: ndcg_at_1000
value: 34.748000000000005
- type: ndcg_at_3
value: 17.058
- type: ndcg_at_5
value: 19.43
- type: precision_at_1
value: 16.678
- type: precision_at_10
value: 7.974
- type: precision_at_100
value: 1.6740000000000002
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 13.681
- type: precision_at_5
value: 11.322000000000001
- type: recall_at_1
value: 7.26
- type: recall_at_10
value: 30.407
- type: recall_at_100
value: 59.073
- type: recall_at_1000
value: 78.58800000000001
- type: recall_at_3
value: 16.493
- type: recall_at_5
value: 22.453
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.176
- type: map_at_10
value: 11.951
- type: map_at_100
value: 16.208
- type: map_at_1000
value: 17.067
- type: map_at_3
value: 8.669
- type: map_at_5
value: 10.061
- type: mrr_at_1
value: 42.5
- type: mrr_at_10
value: 54.312000000000005
- type: mrr_at_100
value: 54.925999999999995
- type: mrr_at_1000
value: 54.959
- type: mrr_at_3
value: 52.292
- type: mrr_at_5
value: 53.554
- type: ndcg_at_1
value: 31.374999999999996
- type: ndcg_at_10
value: 25.480999999999998
- type: ndcg_at_100
value: 30.018
- type: ndcg_at_1000
value: 36.103
- type: ndcg_at_3
value: 27.712999999999997
- type: ndcg_at_5
value: 26.415
- type: precision_at_1
value: 42.5
- type: precision_at_10
value: 20.549999999999997
- type: precision_at_100
value: 6.387
- type: precision_at_1000
value: 1.204
- type: precision_at_3
value: 32.917
- type: precision_at_5
value: 27.400000000000002
- type: recall_at_1
value: 5.176
- type: recall_at_10
value: 18.335
- type: recall_at_100
value: 38.629999999999995
- type: recall_at_1000
value: 59.74699999999999
- type: recall_at_3
value: 10.36
- type: recall_at_5
value: 13.413
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.885
- type: f1
value: 44.330258440550644
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.211
- type: map_at_10
value: 37.946999999999996
- type: map_at_100
value: 38.852
- type: map_at_1000
value: 38.896
- type: map_at_3
value: 34.445
- type: map_at_5
value: 36.451
- type: mrr_at_1
value: 27.453
- type: mrr_at_10
value: 40.505
- type: mrr_at_100
value: 41.342
- type: mrr_at_1000
value: 41.377
- type: mrr_at_3
value: 36.971
- type: mrr_at_5
value: 39.013999999999996
- type: ndcg_at_1
value: 27.453
- type: ndcg_at_10
value: 45.106
- type: ndcg_at_100
value: 49.357
- type: ndcg_at_1000
value: 50.546
- type: ndcg_at_3
value: 38.063
- type: ndcg_at_5
value: 41.603
- type: precision_at_1
value: 27.453
- type: precision_at_10
value: 7.136000000000001
- type: precision_at_100
value: 0.9390000000000001
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.677
- type: precision_at_5
value: 11.899
- type: recall_at_1
value: 25.211
- type: recall_at_10
value: 64.964
- type: recall_at_100
value: 84.23
- type: recall_at_1000
value: 93.307
- type: recall_at_3
value: 45.936
- type: recall_at_5
value: 54.489
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.434
- type: map_at_10
value: 20.325
- type: map_at_100
value: 22.267
- type: map_at_1000
value: 22.46
- type: map_at_3
value: 16.864
- type: map_at_5
value: 18.584999999999997
- type: mrr_at_1
value: 24.074
- type: mrr_at_10
value: 32.487
- type: mrr_at_100
value: 33.595000000000006
- type: mrr_at_1000
value: 33.649
- type: mrr_at_3
value: 29.578
- type: mrr_at_5
value: 31.044
- type: ndcg_at_1
value: 24.074
- type: ndcg_at_10
value: 27.244
- type: ndcg_at_100
value: 35.244
- type: ndcg_at_1000
value: 38.964999999999996
- type: ndcg_at_3
value: 22.709
- type: ndcg_at_5
value: 24.114
- type: precision_at_1
value: 24.074
- type: precision_at_10
value: 8.21
- type: precision_at_100
value: 1.627
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 15.741
- type: precision_at_5
value: 12.037
- type: recall_at_1
value: 11.434
- type: recall_at_10
value: 35.423
- type: recall_at_100
value: 66.056
- type: recall_at_1000
value: 88.63799999999999
- type: recall_at_3
value: 20.968
- type: recall_at_5
value: 26.540999999999997
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.506
- type: map_at_10
value: 44.864
- type: map_at_100
value: 46.016
- type: map_at_1000
value: 46.1
- type: map_at_3
value: 41.518
- type: map_at_5
value: 43.461
- type: mrr_at_1
value: 61.013
- type: mrr_at_10
value: 69.918
- type: mrr_at_100
value: 70.327
- type: mrr_at_1000
value: 70.342
- type: mrr_at_3
value: 68.226
- type: mrr_at_5
value: 69.273
- type: ndcg_at_1
value: 61.013
- type: ndcg_at_10
value: 54.539
- type: ndcg_at_100
value: 58.819
- type: ndcg_at_1000
value: 60.473
- type: ndcg_at_3
value: 49.27
- type: ndcg_at_5
value: 51.993
- type: precision_at_1
value: 61.013
- type: precision_at_10
value: 11.757
- type: precision_at_100
value: 1.5110000000000001
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 31.339
- type: precision_at_5
value: 20.959
- type: recall_at_1
value: 30.506
- type: recall_at_10
value: 58.785
- type: recall_at_100
value: 75.55
- type: recall_at_1000
value: 86.455
- type: recall_at_3
value: 47.009
- type: recall_at_5
value: 52.397000000000006
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 77.954
- type: ap
value: 73.06067313842448
- type: f1
value: 77.6469083443121
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 7.7170000000000005
- type: map_at_10
value: 14.696000000000002
- type: map_at_100
value: 15.973
- type: map_at_1000
value: 16.079
- type: map_at_3
value: 12.059000000000001
- type: map_at_5
value: 13.478000000000002
- type: mrr_at_1
value: 7.9079999999999995
- type: mrr_at_10
value: 14.972
- type: mrr_at_100
value: 16.235
- type: mrr_at_1000
value: 16.337
- type: mrr_at_3
value: 12.323
- type: mrr_at_5
value: 13.751
- type: ndcg_at_1
value: 7.9079999999999995
- type: ndcg_at_10
value: 19.131
- type: ndcg_at_100
value: 25.868000000000002
- type: ndcg_at_1000
value: 28.823999999999998
- type: ndcg_at_3
value: 13.611
- type: ndcg_at_5
value: 16.178
- type: precision_at_1
value: 7.9079999999999995
- type: precision_at_10
value: 3.4259999999999997
- type: precision_at_100
value: 0.687
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 6.103
- type: precision_at_5
value: 4.951
- type: recall_at_1
value: 7.7170000000000005
- type: recall_at_10
value: 33.147999999999996
- type: recall_at_100
value: 65.55199999999999
- type: recall_at_1000
value: 88.748
- type: recall_at_3
value: 17.863
- type: recall_at_5
value: 24.083
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.48335613315093
- type: f1
value: 95.18813547597892
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.83857729138167
- type: f1
value: 63.61922697275075
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65433759246805
- type: f1
value: 73.24385243140212
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.98655010087425
- type: f1
value: 79.3880305174127
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.109152457220606
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 26.928355856501696
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.73337424086118
- type: mrr
value: 30.753319352871074
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.303
- type: map_at_10
value: 9.653
- type: map_at_100
value: 11.952
- type: map_at_1000
value: 13.126999999999999
- type: map_at_3
value: 6.976
- type: map_at_5
value: 8.292
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 45.827
- type: mrr_at_100
value: 46.587
- type: mrr_at_1000
value: 46.635
- type: mrr_at_3
value: 43.344
- type: mrr_at_5
value: 44.876
- type: ndcg_at_1
value: 34.056
- type: ndcg_at_10
value: 27.161
- type: ndcg_at_100
value: 25.552999999999997
- type: ndcg_at_1000
value: 34.671
- type: ndcg_at_3
value: 31.267
- type: ndcg_at_5
value: 29.896
- type: precision_at_1
value: 35.604
- type: precision_at_10
value: 19.969
- type: precision_at_100
value: 6.115
- type: precision_at_1000
value: 1.892
- type: precision_at_3
value: 29.825000000000003
- type: precision_at_5
value: 26.253999999999998
- type: recall_at_1
value: 4.303
- type: recall_at_10
value: 14.033999999999999
- type: recall_at_100
value: 28.250999999999998
- type: recall_at_1000
value: 58.751
- type: recall_at_3
value: 8.257
- type: recall_at_5
value: 10.761999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.668000000000001
- type: map_at_10
value: 26.593
- type: map_at_100
value: 28.094
- type: map_at_1000
value: 28.155
- type: map_at_3
value: 22.054000000000002
- type: map_at_5
value: 24.583
- type: mrr_at_1
value: 17.063
- type: mrr_at_10
value: 29.061999999999998
- type: mrr_at_100
value: 30.281000000000002
- type: mrr_at_1000
value: 30.325000000000003
- type: mrr_at_3
value: 24.754
- type: mrr_at_5
value: 27.281
- type: ndcg_at_1
value: 17.034
- type: ndcg_at_10
value: 34.157
- type: ndcg_at_100
value: 40.988
- type: ndcg_at_1000
value: 42.382999999999996
- type: ndcg_at_3
value: 25.076999999999998
- type: ndcg_at_5
value: 29.572
- type: precision_at_1
value: 17.034
- type: precision_at_10
value: 6.561
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.167
- type: precision_at_5
value: 9.809
- type: recall_at_1
value: 14.668000000000001
- type: recall_at_10
value: 55.291999999999994
- type: recall_at_100
value: 85.82
- type: recall_at_1000
value: 96.164
- type: recall_at_3
value: 31.208999999999996
- type: recall_at_5
value: 41.766
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.20899999999999
- type: map_at_10
value: 80.024
- type: map_at_100
value: 80.73
- type: map_at_1000
value: 80.753
- type: map_at_3
value: 76.82900000000001
- type: map_at_5
value: 78.866
- type: mrr_at_1
value: 76.25
- type: mrr_at_10
value: 83.382
- type: mrr_at_100
value: 83.535
- type: mrr_at_1000
value: 83.538
- type: mrr_at_3
value: 82.013
- type: mrr_at_5
value: 82.931
- type: ndcg_at_1
value: 76.25999999999999
- type: ndcg_at_10
value: 84.397
- type: ndcg_at_100
value: 85.988
- type: ndcg_at_1000
value: 86.18299999999999
- type: ndcg_at_3
value: 80.778
- type: ndcg_at_5
value: 82.801
- type: precision_at_1
value: 76.25999999999999
- type: precision_at_10
value: 12.952
- type: precision_at_100
value: 1.509
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.323
- type: precision_at_5
value: 23.524
- type: recall_at_1
value: 66.20899999999999
- type: recall_at_10
value: 93.108
- type: recall_at_100
value: 98.817
- type: recall_at_1000
value: 99.857
- type: recall_at_3
value: 83.031
- type: recall_at_5
value: 88.441
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 41.82535503883439
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.077510084458055
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.383
- type: map_at_10
value: 8.839
- type: map_at_100
value: 10.876
- type: map_at_1000
value: 11.201
- type: map_at_3
value: 6.361
- type: map_at_5
value: 7.536
- type: mrr_at_1
value: 16.6
- type: mrr_at_10
value: 26.003999999999998
- type: mrr_at_100
value: 27.271
- type: mrr_at_1000
value: 27.354
- type: mrr_at_3
value: 22.900000000000002
- type: mrr_at_5
value: 24.58
- type: ndcg_at_1
value: 16.6
- type: ndcg_at_10
value: 15.345
- type: ndcg_at_100
value: 23.659
- type: ndcg_at_1000
value: 29.537000000000003
- type: ndcg_at_3
value: 14.283999999999999
- type: ndcg_at_5
value: 12.509999999999998
- type: precision_at_1
value: 16.6
- type: precision_at_10
value: 8.17
- type: precision_at_100
value: 2.028
- type: precision_at_1000
value: 0.34299999999999997
- type: precision_at_3
value: 13.633000000000001
- type: precision_at_5
value: 11.16
- type: recall_at_1
value: 3.383
- type: recall_at_10
value: 16.557
- type: recall_at_100
value: 41.123
- type: recall_at_1000
value: 69.67999999999999
- type: recall_at_3
value: 8.298
- type: recall_at_5
value: 11.322000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 75.55405115197729
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 67.65074099726466
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 83.89765011154986
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 76.97256789216159
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 83.80216382863031
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 81.90574806413879
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 85.58485422591949
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 65.92967262944444
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 80.41509666334721
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 77.81287769479543
- type: mrr
value: 94.13409665860645
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.093999999999994
- type: map_at_10
value: 63.641999999999996
- type: map_at_100
value: 64.402
- type: map_at_1000
value: 64.416
- type: map_at_3
value: 60.878
- type: map_at_5
value: 62.778
- type: mrr_at_1
value: 55.333
- type: mrr_at_10
value: 65.139
- type: mrr_at_100
value: 65.75999999999999
- type: mrr_at_1000
value: 65.77199999999999
- type: mrr_at_3
value: 62.944
- type: mrr_at_5
value: 64.511
- type: ndcg_at_1
value: 55.333
- type: ndcg_at_10
value: 68.675
- type: ndcg_at_100
value: 71.794
- type: ndcg_at_1000
value: 72.18299999999999
- type: ndcg_at_3
value: 63.977
- type: ndcg_at_5
value: 66.866
- type: precision_at_1
value: 55.333
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.087
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.0
- type: recall_at_1
value: 52.093999999999994
- type: recall_at_10
value: 82.506
- type: recall_at_100
value: 95.933
- type: recall_at_1000
value: 99.0
- type: recall_at_3
value: 70.078
- type: recall_at_5
value: 77.35600000000001
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.7128712871287
- type: cos_sim_ap
value: 91.30057039245253
- type: cos_sim_f1
value: 85.35480624056368
- type: cos_sim_precision
value: 85.91691995947315
- type: cos_sim_recall
value: 84.8
- type: dot_accuracy
value: 99.35346534653465
- type: dot_ap
value: 67.929309733355
- type: dot_f1
value: 63.94205897568547
- type: dot_precision
value: 66.2379421221865
- type: dot_recall
value: 61.8
- type: euclidean_accuracy
value: 99.69009900990099
- type: euclidean_ap
value: 89.62179420600057
- type: euclidean_f1
value: 83.93039918116682
- type: euclidean_precision
value: 85.9538784067086
- type: euclidean_recall
value: 82.0
- type: manhattan_accuracy
value: 99.70990099009902
- type: manhattan_ap
value: 90.29611631593602
- type: manhattan_f1
value: 84.81729284611424
- type: manhattan_precision
value: 87.38069989395547
- type: manhattan_recall
value: 82.39999999999999
- type: max_accuracy
value: 99.7128712871287
- type: max_ap
value: 91.30057039245253
- type: max_f1
value: 85.35480624056368
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.33611278831218
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.504437768624214
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.80014786474266
- type: mrr
value: 50.468909154570916
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.677648147466808
- type: cos_sim_spearman
value: 30.191761045901888
- type: dot_pearson
value: 23.16759191245942
- type: dot_spearman
value: 23.186942570638486
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.214
- type: map_at_10
value: 1.2309999999999999
- type: map_at_100
value: 5.867
- type: map_at_1000
value: 14.671999999999999
- type: map_at_3
value: 0.519
- type: map_at_5
value: 0.764
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 87.519
- type: mrr_at_100
value: 87.519
- type: mrr_at_1000
value: 87.536
- type: mrr_at_3
value: 86.333
- type: mrr_at_5
value: 87.233
- type: ndcg_at_1
value: 77.0
- type: ndcg_at_10
value: 55.665
- type: ndcg_at_100
value: 39.410000000000004
- type: ndcg_at_1000
value: 37.21
- type: ndcg_at_3
value: 65.263
- type: ndcg_at_5
value: 61.424
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 55.400000000000006
- type: precision_at_100
value: 39.04
- type: precision_at_1000
value: 16.788
- type: precision_at_3
value: 67.333
- type: precision_at_5
value: 62.8
- type: recall_at_1
value: 0.214
- type: recall_at_10
value: 1.4200000000000002
- type: recall_at_100
value: 9.231
- type: recall_at_1000
value: 35.136
- type: recall_at_3
value: 0.544
- type: recall_at_5
value: 0.832
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.41000000000000003
- type: map_at_10
value: 2.32
- type: map_at_100
value: 4.077
- type: map_at_1000
value: 4.9430000000000005
- type: map_at_3
value: 1.087
- type: map_at_5
value: 1.466
- type: mrr_at_1
value: 6.122
- type: mrr_at_10
value: 13.999
- type: mrr_at_100
value: 16.524
- type: mrr_at_1000
value: 16.567999999999998
- type: mrr_at_3
value: 11.224
- type: mrr_at_5
value: 13.163
- type: ndcg_at_1
value: 5.102
- type: ndcg_at_10
value: 6.542000000000001
- type: ndcg_at_100
value: 14.127
- type: ndcg_at_1000
value: 24.396
- type: ndcg_at_3
value: 5.653
- type: ndcg_at_5
value: 5.5649999999999995
- type: precision_at_1
value: 6.122
- type: precision_at_10
value: 7.142999999999999
- type: precision_at_100
value: 3.51
- type: precision_at_1000
value: 0.9860000000000001
- type: precision_at_3
value: 6.802999999999999
- type: precision_at_5
value: 6.938999999999999
- type: recall_at_1
value: 0.41000000000000003
- type: recall_at_10
value: 5.627
- type: recall_at_100
value: 23.121
- type: recall_at_1000
value: 54.626
- type: recall_at_3
value: 1.763
- type: recall_at_5
value: 3.013
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.71119999999999
- type: ap
value: 15.1342268718371
- type: f1
value: 55.043262693594855
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.89983022071308
- type: f1
value: 61.13086468149106
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 30.264802332456515
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.46086904690947
- type: cos_sim_ap
value: 68.76039123104324
- type: cos_sim_f1
value: 63.002224839680665
- type: cos_sim_precision
value: 62.503245910153204
- type: cos_sim_recall
value: 63.50923482849604
- type: dot_accuracy
value: 80.07391071109257
- type: dot_ap
value: 53.43322643579626
- type: dot_f1
value: 52.6850065983149
- type: dot_precision
value: 42.81471704339218
- type: dot_recall
value: 68.46965699208444
- type: euclidean_accuracy
value: 84.2701317279609
- type: euclidean_ap
value: 67.55078414631596
- type: euclidean_f1
value: 62.90723537877797
- type: euclidean_precision
value: 62.392940565792884
- type: euclidean_recall
value: 63.43007915567283
- type: manhattan_accuracy
value: 84.22244739822375
- type: manhattan_ap
value: 67.92488847948273
- type: manhattan_f1
value: 62.99132210311383
- type: manhattan_precision
value: 60.99851705388038
- type: manhattan_recall
value: 65.11873350923483
- type: max_accuracy
value: 84.46086904690947
- type: max_ap
value: 68.76039123104324
- type: max_f1
value: 63.002224839680665
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.71296619707377
- type: cos_sim_ap
value: 82.76174215711472
- type: cos_sim_f1
value: 75.73585592141168
- type: cos_sim_precision
value: 71.79416430985721
- type: cos_sim_recall
value: 80.1355097012627
- type: dot_accuracy
value: 85.62502425583111
- type: dot_ap
value: 77.50549495030725
- type: dot_f1
value: 71.47900863425035
- type: dot_precision
value: 65.4587361546834
- type: dot_recall
value: 78.71881736987989
- type: euclidean_accuracy
value: 87.12694531765437
- type: euclidean_ap
value: 81.63583409712018
- type: euclidean_f1
value: 74.50966015324268
- type: euclidean_precision
value: 71.11764294212331
- type: euclidean_recall
value: 78.24145364952264
- type: manhattan_accuracy
value: 87.35009896379088
- type: manhattan_ap
value: 82.20417545366242
- type: manhattan_f1
value: 74.84157622550805
- type: manhattan_precision
value: 71.00898410504493
- type: manhattan_recall
value: 79.11148752694795
- type: max_accuracy
value: 87.71296619707377
- type: max_ap
value: 82.76174215711472
- type: max_f1
value: 75.73585592141168
---
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp",
)
model = model.merge_and_unload() # This can take several minutes on cpu
# Loading unsupervised SimCSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6175, 0.2535],
[0.2298, 0.5792]])
"""
```
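A minimal follow-up sketch (not part of the LLM2Vec API itself): continuing from the snippet above, so `cos_sim`, `queries`, and `documents` refer to the variables defined there, the similarity matrix can be turned into a per-query ranking.
```python
import torch

# Rank documents for each query by cosine similarity (continues the example above).
ranking = torch.argsort(cos_sim, dim=1, descending=True)
for qi, (_, query_text) in enumerate(queries):
    best_doc = documents[ranking[qi, 0].item()]
    print(f"Query: {query_text!r} -> best match: {best_doc[:60]!r}")
```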
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
ggml-org/e5-small-v2-Q8_0-GGUF | ggml-org | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:intfloat/e5-small-v2",
"base_model:quantized:intfloat/e5-small-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 1,738 | 1,738 | 24 | 0 | ---
base_model: intfloat/e5-small-v2
language:
- en
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: e5-small-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.59701492537313
- type: ap
value: 41.67064885731708
- type: f1
value: 71.86465946398573
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.265875
- type: ap
value: 87.67633085349644
- type: f1
value: 91.24297521425744
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.882000000000005
- type: f1
value: 45.08058870381236
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.697
- type: map_at_10
value: 33.975
- type: map_at_100
value: 35.223
- type: map_at_1000
value: 35.260000000000005
- type: map_at_3
value: 29.776999999999997
- type: map_at_5
value: 32.035000000000004
- type: mrr_at_1
value: 20.982
- type: mrr_at_10
value: 34.094
- type: mrr_at_100
value: 35.343
- type: mrr_at_1000
value: 35.38
- type: mrr_at_3
value: 29.884
- type: mrr_at_5
value: 32.141999999999996
- type: ndcg_at_1
value: 20.697
- type: ndcg_at_10
value: 41.668
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 48.305
- type: ndcg_at_3
value: 32.928000000000004
- type: ndcg_at_5
value: 36.998999999999995
- type: precision_at_1
value: 20.697
- type: precision_at_10
value: 6.636
- type: precision_at_100
value: 0.924
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.035
- type: precision_at_5
value: 10.398
- type: recall_at_1
value: 20.697
- type: recall_at_10
value: 66.35799999999999
- type: recall_at_100
value: 92.39
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 42.105
- type: recall_at_5
value: 51.991
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 42.1169517447068
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 34.79553720107097
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.10811337308168
- type: mrr
value: 71.56410763751482
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 78.46834918248696
- type: cos_sim_spearman
value: 79.4289182755206
- type: euclidean_pearson
value: 76.26662973727008
- type: euclidean_spearman
value: 78.11744260952536
- type: manhattan_pearson
value: 76.08175262609434
- type: manhattan_spearman
value: 78.29395265552289
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.63636363636364
- type: f1
value: 81.55779952376953
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.88541137137571
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.05205685274407
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.293999999999997
- type: map_at_10
value: 39.876
- type: map_at_100
value: 41.315000000000005
- type: map_at_1000
value: 41.451
- type: map_at_3
value: 37.194
- type: map_at_5
value: 38.728
- type: mrr_at_1
value: 37.053000000000004
- type: mrr_at_10
value: 45.281
- type: mrr_at_100
value: 46.188
- type: mrr_at_1000
value: 46.245999999999995
- type: mrr_at_3
value: 43.228
- type: mrr_at_5
value: 44.366
- type: ndcg_at_1
value: 37.053000000000004
- type: ndcg_at_10
value: 45.086
- type: ndcg_at_100
value: 50.756
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 41.416
- type: ndcg_at_5
value: 43.098
- type: precision_at_1
value: 37.053000000000004
- type: precision_at_10
value: 8.34
- type: precision_at_100
value: 1.346
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 19.647000000000002
- type: precision_at_5
value: 13.877
- type: recall_at_1
value: 30.293999999999997
- type: recall_at_10
value: 54.309
- type: recall_at_100
value: 78.59
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 43.168
- type: recall_at_5
value: 48.192
- type: map_at_1
value: 28.738000000000003
- type: map_at_10
value: 36.925999999999995
- type: map_at_100
value: 38.017
- type: map_at_1000
value: 38.144
- type: map_at_3
value: 34.446
- type: map_at_5
value: 35.704
- type: mrr_at_1
value: 35.478
- type: mrr_at_10
value: 42.786
- type: mrr_at_100
value: 43.458999999999996
- type: mrr_at_1000
value: 43.507
- type: mrr_at_3
value: 40.648
- type: mrr_at_5
value: 41.804
- type: ndcg_at_1
value: 35.478
- type: ndcg_at_10
value: 42.044
- type: ndcg_at_100
value: 46.249
- type: ndcg_at_1000
value: 48.44
- type: ndcg_at_3
value: 38.314
- type: ndcg_at_5
value: 39.798
- type: precision_at_1
value: 35.478
- type: precision_at_10
value: 7.764
- type: precision_at_100
value: 1.253
- type: precision_at_1000
value: 0.174
- type: precision_at_3
value: 18.047
- type: precision_at_5
value: 12.637
- type: recall_at_1
value: 28.738000000000003
- type: recall_at_10
value: 50.659
- type: recall_at_100
value: 68.76299999999999
- type: recall_at_1000
value: 82.811
- type: recall_at_3
value: 39.536
- type: recall_at_5
value: 43.763999999999996
- type: map_at_1
value: 38.565
- type: map_at_10
value: 50.168
- type: map_at_100
value: 51.11
- type: map_at_1000
value: 51.173
- type: map_at_3
value: 47.044000000000004
- type: map_at_5
value: 48.838
- type: mrr_at_1
value: 44.201
- type: mrr_at_10
value: 53.596999999999994
- type: mrr_at_100
value: 54.211
- type: mrr_at_1000
value: 54.247
- type: mrr_at_3
value: 51.202000000000005
- type: mrr_at_5
value: 52.608999999999995
- type: ndcg_at_1
value: 44.201
- type: ndcg_at_10
value: 55.694
- type: ndcg_at_100
value: 59.518
- type: ndcg_at_1000
value: 60.907
- type: ndcg_at_3
value: 50.395999999999994
- type: ndcg_at_5
value: 53.022999999999996
- type: precision_at_1
value: 44.201
- type: precision_at_10
value: 8.84
- type: precision_at_100
value: 1.162
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 22.153
- type: precision_at_5
value: 15.260000000000002
- type: recall_at_1
value: 38.565
- type: recall_at_10
value: 68.65
- type: recall_at_100
value: 85.37400000000001
- type: recall_at_1000
value: 95.37400000000001
- type: recall_at_3
value: 54.645999999999994
- type: recall_at_5
value: 60.958
- type: map_at_1
value: 23.945
- type: map_at_10
value: 30.641000000000002
- type: map_at_100
value: 31.599
- type: map_at_1000
value: 31.691000000000003
- type: map_at_3
value: 28.405
- type: map_at_5
value: 29.704000000000004
- type: mrr_at_1
value: 25.537
- type: mrr_at_10
value: 32.22
- type: mrr_at_100
value: 33.138
- type: mrr_at_1000
value: 33.214
- type: mrr_at_3
value: 30.151
- type: mrr_at_5
value: 31.298
- type: ndcg_at_1
value: 25.537
- type: ndcg_at_10
value: 34.638000000000005
- type: ndcg_at_100
value: 39.486
- type: ndcg_at_1000
value: 41.936
- type: ndcg_at_3
value: 30.333
- type: ndcg_at_5
value: 32.482
- type: precision_at_1
value: 25.537
- type: precision_at_10
value: 5.153
- type: precision_at_100
value: 0.7929999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.429
- type: precision_at_5
value: 8.723
- type: recall_at_1
value: 23.945
- type: recall_at_10
value: 45.412
- type: recall_at_100
value: 67.836
- type: recall_at_1000
value: 86.467
- type: recall_at_3
value: 34.031
- type: recall_at_5
value: 39.039
- type: map_at_1
value: 14.419
- type: map_at_10
value: 20.858999999999998
- type: map_at_100
value: 22.067999999999998
- type: map_at_1000
value: 22.192
- type: map_at_3
value: 18.673000000000002
- type: map_at_5
value: 19.968
- type: mrr_at_1
value: 17.785999999999998
- type: mrr_at_10
value: 24.878
- type: mrr_at_100
value: 26.021
- type: mrr_at_1000
value: 26.095000000000002
- type: mrr_at_3
value: 22.616
- type: mrr_at_5
value: 23.785
- type: ndcg_at_1
value: 17.785999999999998
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 31.05
- type: ndcg_at_1000
value: 34.052
- type: ndcg_at_3
value: 21.117
- type: ndcg_at_5
value: 23.048
- type: precision_at_1
value: 17.785999999999998
- type: precision_at_10
value: 4.590000000000001
- type: precision_at_100
value: 0.864
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.313
- type: recall_at_1
value: 14.419
- type: recall_at_10
value: 34.477999999999994
- type: recall_at_100
value: 60.02499999999999
- type: recall_at_1000
value: 81.646
- type: recall_at_3
value: 23.515
- type: recall_at_5
value: 28.266999999999996
- type: map_at_1
value: 26.268
- type: map_at_10
value: 35.114000000000004
- type: map_at_100
value: 36.212
- type: map_at_1000
value: 36.333
- type: map_at_3
value: 32.436
- type: map_at_5
value: 33.992
- type: mrr_at_1
value: 31.761
- type: mrr_at_10
value: 40.355999999999995
- type: mrr_at_100
value: 41.125
- type: mrr_at_1000
value: 41.186
- type: mrr_at_3
value: 37.937
- type: mrr_at_5
value: 39.463
- type: ndcg_at_1
value: 31.761
- type: ndcg_at_10
value: 40.422000000000004
- type: ndcg_at_100
value: 45.458999999999996
- type: ndcg_at_1000
value: 47.951
- type: ndcg_at_3
value: 35.972
- type: ndcg_at_5
value: 38.272
- type: precision_at_1
value: 31.761
- type: precision_at_10
value: 7.103
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.779
- type: precision_at_5
value: 11.877
- type: recall_at_1
value: 26.268
- type: recall_at_10
value: 51.053000000000004
- type: recall_at_100
value: 72.702
- type: recall_at_1000
value: 89.521
- type: recall_at_3
value: 38.619
- type: recall_at_5
value: 44.671
- type: map_at_1
value: 25.230999999999998
- type: map_at_10
value: 34.227000000000004
- type: map_at_100
value: 35.370000000000005
- type: map_at_1000
value: 35.488
- type: map_at_3
value: 31.496000000000002
- type: map_at_5
value: 33.034
- type: mrr_at_1
value: 30.822
- type: mrr_at_10
value: 39.045
- type: mrr_at_100
value: 39.809
- type: mrr_at_1000
value: 39.873
- type: mrr_at_3
value: 36.663000000000004
- type: mrr_at_5
value: 37.964
- type: ndcg_at_1
value: 30.822
- type: ndcg_at_10
value: 39.472
- type: ndcg_at_100
value: 44.574999999999996
- type: ndcg_at_1000
value: 47.162
- type: ndcg_at_3
value: 34.929
- type: ndcg_at_5
value: 37.002
- type: precision_at_1
value: 30.822
- type: precision_at_10
value: 7.055
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 16.591
- type: precision_at_5
value: 11.667
- type: recall_at_1
value: 25.230999999999998
- type: recall_at_10
value: 50.42100000000001
- type: recall_at_100
value: 72.685
- type: recall_at_1000
value: 90.469
- type: recall_at_3
value: 37.503
- type: recall_at_5
value: 43.123
- type: map_at_1
value: 24.604166666666664
- type: map_at_10
value: 32.427166666666665
- type: map_at_100
value: 33.51474999999999
- type: map_at_1000
value: 33.6345
- type: map_at_3
value: 30.02366666666667
- type: map_at_5
value: 31.382333333333328
- type: mrr_at_1
value: 29.001166666666666
- type: mrr_at_10
value: 36.3315
- type: mrr_at_100
value: 37.16683333333333
- type: mrr_at_1000
value: 37.23341666666668
- type: mrr_at_3
value: 34.19916666666667
- type: mrr_at_5
value: 35.40458333333334
- type: ndcg_at_1
value: 29.001166666666666
- type: ndcg_at_10
value: 37.06883333333334
- type: ndcg_at_100
value: 41.95816666666666
- type: ndcg_at_1000
value: 44.501583333333336
- type: ndcg_at_3
value: 32.973499999999994
- type: ndcg_at_5
value: 34.90833333333334
- type: precision_at_1
value: 29.001166666666666
- type: precision_at_10
value: 6.336
- type: precision_at_100
value: 1.0282499999999999
- type: precision_at_1000
value: 0.14391666666666664
- type: precision_at_3
value: 14.932499999999996
- type: precision_at_5
value: 10.50825
- type: recall_at_1
value: 24.604166666666664
- type: recall_at_10
value: 46.9525
- type: recall_at_100
value: 68.67816666666667
- type: recall_at_1000
value: 86.59783333333334
- type: recall_at_3
value: 35.49783333333333
- type: recall_at_5
value: 40.52525000000001
- type: map_at_1
value: 23.559
- type: map_at_10
value: 29.023
- type: map_at_100
value: 29.818
- type: map_at_1000
value: 29.909000000000002
- type: map_at_3
value: 27.037
- type: map_at_5
value: 28.225
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 31.962000000000003
- type: mrr_at_100
value: 32.726
- type: mrr_at_1000
value: 32.800000000000004
- type: mrr_at_3
value: 30.266
- type: mrr_at_5
value: 31.208999999999996
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 32.53
- type: ndcg_at_100
value: 36.758
- type: ndcg_at_1000
value: 39.362
- type: ndcg_at_3
value: 28.985
- type: ndcg_at_5
value: 30.757
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 4.968999999999999
- type: precision_at_100
value: 0.759
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.219
- type: precision_at_5
value: 8.527999999999999
- type: recall_at_1
value: 23.559
- type: recall_at_10
value: 40.585
- type: recall_at_100
value: 60.306000000000004
- type: recall_at_1000
value: 80.11
- type: recall_at_3
value: 30.794
- type: recall_at_5
value: 35.186
- type: map_at_1
value: 16.384999999999998
- type: map_at_10
value: 22.142
- type: map_at_100
value: 23.057
- type: map_at_1000
value: 23.177
- type: map_at_3
value: 20.29
- type: map_at_5
value: 21.332
- type: mrr_at_1
value: 19.89
- type: mrr_at_10
value: 25.771
- type: mrr_at_100
value: 26.599
- type: mrr_at_1000
value: 26.680999999999997
- type: mrr_at_3
value: 23.962
- type: mrr_at_5
value: 24.934
- type: ndcg_at_1
value: 19.89
- type: ndcg_at_10
value: 25.97
- type: ndcg_at_100
value: 30.605
- type: ndcg_at_1000
value: 33.619
- type: ndcg_at_3
value: 22.704
- type: ndcg_at_5
value: 24.199
- type: precision_at_1
value: 19.89
- type: precision_at_10
value: 4.553
- type: precision_at_100
value: 0.8049999999999999
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 10.541
- type: precision_at_5
value: 7.46
- type: recall_at_1
value: 16.384999999999998
- type: recall_at_10
value: 34.001
- type: recall_at_100
value: 55.17100000000001
- type: recall_at_1000
value: 77.125
- type: recall_at_3
value: 24.618000000000002
- type: recall_at_5
value: 28.695999999999998
- type: map_at_1
value: 23.726
- type: map_at_10
value: 31.227
- type: map_at_100
value: 32.311
- type: map_at_1000
value: 32.419
- type: map_at_3
value: 28.765
- type: map_at_5
value: 30.229
- type: mrr_at_1
value: 27.705000000000002
- type: mrr_at_10
value: 35.085
- type: mrr_at_100
value: 35.931000000000004
- type: mrr_at_1000
value: 36
- type: mrr_at_3
value: 32.603
- type: mrr_at_5
value: 34.117999999999995
- type: ndcg_at_1
value: 27.705000000000002
- type: ndcg_at_10
value: 35.968
- type: ndcg_at_100
value: 41.197
- type: ndcg_at_1000
value: 43.76
- type: ndcg_at_3
value: 31.304
- type: ndcg_at_5
value: 33.661
- type: precision_at_1
value: 27.705000000000002
- type: precision_at_10
value: 5.942
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 13.868
- type: precision_at_5
value: 9.944
- type: recall_at_1
value: 23.726
- type: recall_at_10
value: 46.786
- type: recall_at_100
value: 70.072
- type: recall_at_1000
value: 88.2
- type: recall_at_3
value: 33.981
- type: recall_at_5
value: 39.893
- type: map_at_1
value: 23.344
- type: map_at_10
value: 31.636999999999997
- type: map_at_100
value: 33.065
- type: map_at_1000
value: 33.300000000000004
- type: map_at_3
value: 29.351
- type: map_at_5
value: 30.432
- type: mrr_at_1
value: 27.866000000000003
- type: mrr_at_10
value: 35.587
- type: mrr_at_100
value: 36.52
- type: mrr_at_1000
value: 36.597
- type: mrr_at_3
value: 33.696
- type: mrr_at_5
value: 34.713
- type: ndcg_at_1
value: 27.866000000000003
- type: ndcg_at_10
value: 36.61
- type: ndcg_at_100
value: 41.88
- type: ndcg_at_1000
value: 45.105000000000004
- type: ndcg_at_3
value: 33.038000000000004
- type: ndcg_at_5
value: 34.331
- type: precision_at_1
value: 27.866000000000003
- type: precision_at_10
value: 6.917
- type: precision_at_100
value: 1.3599999999999999
- type: precision_at_1000
value: 0.233
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.344
- type: recall_at_10
value: 45.782000000000004
- type: recall_at_100
value: 69.503
- type: recall_at_1000
value: 90.742
- type: recall_at_3
value: 35.160000000000004
- type: recall_at_5
value: 39.058
- type: map_at_1
value: 20.776
- type: map_at_10
value: 27.285999999999998
- type: map_at_100
value: 28.235
- type: map_at_1000
value: 28.337
- type: map_at_3
value: 25.147000000000002
- type: map_at_5
value: 26.401999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 29.409999999999997
- type: mrr_at_100
value: 30.275000000000002
- type: mrr_at_1000
value: 30.354999999999997
- type: mrr_at_3
value: 27.418
- type: mrr_at_5
value: 28.592000000000002
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 31.239
- type: ndcg_at_100
value: 35.965
- type: ndcg_at_1000
value: 38.602
- type: ndcg_at_3
value: 27.174
- type: ndcg_at_5
value: 29.229
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 4.806
- type: precision_at_100
value: 0.776
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.022
- type: recall_at_1
value: 20.776
- type: recall_at_10
value: 41.294
- type: recall_at_100
value: 63.111
- type: recall_at_1000
value: 82.88600000000001
- type: recall_at_3
value: 30.403000000000002
- type: recall_at_5
value: 35.455999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.376
- type: map_at_10
value: 15.926000000000002
- type: map_at_100
value: 17.585
- type: map_at_1000
value: 17.776
- type: map_at_3
value: 13.014000000000001
- type: map_at_5
value: 14.417
- type: mrr_at_1
value: 20.195
- type: mrr_at_10
value: 29.95
- type: mrr_at_100
value: 31.052000000000003
- type: mrr_at_1000
value: 31.108000000000004
- type: mrr_at_3
value: 26.667
- type: mrr_at_5
value: 28.458
- type: ndcg_at_1
value: 20.195
- type: ndcg_at_10
value: 22.871
- type: ndcg_at_100
value: 29.921999999999997
- type: ndcg_at_1000
value: 33.672999999999995
- type: ndcg_at_3
value: 17.782999999999998
- type: ndcg_at_5
value: 19.544
- type: precision_at_1
value: 20.195
- type: precision_at_10
value: 7.394
- type: precision_at_100
value: 1.493
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.073
- type: precision_at_5
value: 10.436
- type: recall_at_1
value: 9.376
- type: recall_at_10
value: 28.544999999999998
- type: recall_at_100
value: 53.147999999999996
- type: recall_at_1000
value: 74.62
- type: recall_at_3
value: 16.464000000000002
- type: recall_at_5
value: 21.004
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.415000000000001
- type: map_at_10
value: 18.738
- type: map_at_100
value: 27.291999999999998
- type: map_at_1000
value: 28.992
- type: map_at_3
value: 13.196
- type: map_at_5
value: 15.539
- type: mrr_at_1
value: 66.5
- type: mrr_at_10
value: 74.518
- type: mrr_at_100
value: 74.86
- type: mrr_at_1000
value: 74.87
- type: mrr_at_3
value: 72.375
- type: mrr_at_5
value: 73.86200000000001
- type: ndcg_at_1
value: 54.37499999999999
- type: ndcg_at_10
value: 41.317
- type: ndcg_at_100
value: 45.845
- type: ndcg_at_1000
value: 52.92
- type: ndcg_at_3
value: 44.983000000000004
- type: ndcg_at_5
value: 42.989
- type: precision_at_1
value: 66.5
- type: precision_at_10
value: 33.6
- type: precision_at_100
value: 10.972999999999999
- type: precision_at_1000
value: 2.214
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.15
- type: recall_at_1
value: 8.415000000000001
- type: recall_at_10
value: 24.953
- type: recall_at_100
value: 52.48199999999999
- type: recall_at_1000
value: 75.093
- type: recall_at_3
value: 14.341000000000001
- type: recall_at_5
value: 18.468
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.06499999999999
- type: f1
value: 41.439327599975385
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 66.02
- type: map_at_10
value: 76.68599999999999
- type: map_at_100
value: 76.959
- type: map_at_1000
value: 76.972
- type: map_at_3
value: 75.024
- type: map_at_5
value: 76.153
- type: mrr_at_1
value: 71.197
- type: mrr_at_10
value: 81.105
- type: mrr_at_100
value: 81.232
- type: mrr_at_1000
value: 81.233
- type: mrr_at_3
value: 79.758
- type: mrr_at_5
value: 80.69
- type: ndcg_at_1
value: 71.197
- type: ndcg_at_10
value: 81.644
- type: ndcg_at_100
value: 82.645
- type: ndcg_at_1000
value: 82.879
- type: ndcg_at_3
value: 78.792
- type: ndcg_at_5
value: 80.528
- type: precision_at_1
value: 71.197
- type: precision_at_10
value: 10.206999999999999
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 30.868000000000002
- type: precision_at_5
value: 19.559
- type: recall_at_1
value: 66.02
- type: recall_at_10
value: 92.50699999999999
- type: recall_at_100
value: 96.497
- type: recall_at_1000
value: 97.956
- type: recall_at_3
value: 84.866
- type: recall_at_5
value: 89.16199999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.948
- type: map_at_10
value: 29.833
- type: map_at_100
value: 31.487
- type: map_at_1000
value: 31.674000000000003
- type: map_at_3
value: 26.029999999999998
- type: map_at_5
value: 28.038999999999998
- type: mrr_at_1
value: 34.721999999999994
- type: mrr_at_10
value: 44.214999999999996
- type: mrr_at_100
value: 44.994
- type: mrr_at_1000
value: 45.051
- type: mrr_at_3
value: 41.667
- type: mrr_at_5
value: 43.032
- type: ndcg_at_1
value: 34.721999999999994
- type: ndcg_at_10
value: 37.434
- type: ndcg_at_100
value: 43.702000000000005
- type: ndcg_at_1000
value: 46.993
- type: ndcg_at_3
value: 33.56
- type: ndcg_at_5
value: 34.687
- type: precision_at_1
value: 34.721999999999994
- type: precision_at_10
value: 10.401
- type: precision_at_100
value: 1.7049999999999998
- type: precision_at_1000
value: 0.22799999999999998
- type: precision_at_3
value: 22.531000000000002
- type: precision_at_5
value: 16.42
- type: recall_at_1
value: 17.948
- type: recall_at_10
value: 45.062999999999995
- type: recall_at_100
value: 68.191
- type: recall_at_1000
value: 87.954
- type: recall_at_3
value: 31.112000000000002
- type: recall_at_5
value: 36.823
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.644
- type: map_at_10
value: 57.658
- type: map_at_100
value: 58.562000000000005
- type: map_at_1000
value: 58.62500000000001
- type: map_at_3
value: 54.022999999999996
- type: map_at_5
value: 56.293000000000006
- type: mrr_at_1
value: 73.288
- type: mrr_at_10
value: 80.51700000000001
- type: mrr_at_100
value: 80.72
- type: mrr_at_1000
value: 80.728
- type: mrr_at_3
value: 79.33200000000001
- type: mrr_at_5
value: 80.085
- type: ndcg_at_1
value: 73.288
- type: ndcg_at_10
value: 66.61
- type: ndcg_at_100
value: 69.723
- type: ndcg_at_1000
value: 70.96000000000001
- type: ndcg_at_3
value: 61.358999999999995
- type: ndcg_at_5
value: 64.277
- type: precision_at_1
value: 73.288
- type: precision_at_10
value: 14.17
- type: precision_at_100
value: 1.659
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 39.487
- type: precision_at_5
value: 25.999
- type: recall_at_1
value: 36.644
- type: recall_at_10
value: 70.851
- type: recall_at_100
value: 82.94399999999999
- type: recall_at_1000
value: 91.134
- type: recall_at_3
value: 59.230000000000004
- type: recall_at_5
value: 64.997
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 86.00280000000001
- type: ap
value: 80.46302061021223
- type: f1
value: 85.9592921596419
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.541
- type: map_at_10
value: 34.625
- type: map_at_100
value: 35.785
- type: map_at_1000
value: 35.831
- type: map_at_3
value: 30.823
- type: map_at_5
value: 32.967999999999996
- type: mrr_at_1
value: 23.180999999999997
- type: mrr_at_10
value: 35.207
- type: mrr_at_100
value: 36.315
- type: mrr_at_1000
value: 36.355
- type: mrr_at_3
value: 31.483
- type: mrr_at_5
value: 33.589999999999996
- type: ndcg_at_1
value: 23.195
- type: ndcg_at_10
value: 41.461
- type: ndcg_at_100
value: 47.032000000000004
- type: ndcg_at_1000
value: 48.199999999999996
- type: ndcg_at_3
value: 33.702
- type: ndcg_at_5
value: 37.522
- type: precision_at_1
value: 23.195
- type: precision_at_10
value: 6.526999999999999
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.308000000000002
- type: precision_at_5
value: 10.507
- type: recall_at_1
value: 22.541
- type: recall_at_10
value: 62.524
- type: recall_at_100
value: 88.228
- type: recall_at_1000
value: 97.243
- type: recall_at_3
value: 41.38
- type: recall_at_5
value: 50.55
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.69949840401279
- type: f1
value: 92.54141471311786
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.56041951664386
- type: f1
value: 55.88499977508287
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.62071284465365
- type: f1
value: 69.36717546572152
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.35843981170142
- type: f1
value: 76.15496453538884
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.33664956793118
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.883839621715524
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.096874986740758
- type: mrr
value: 30.97300481932132
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.4
- type: map_at_10
value: 11.852
- type: map_at_100
value: 14.758
- type: map_at_1000
value: 16.134
- type: map_at_3
value: 8.558
- type: map_at_5
value: 10.087
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 52.05800000000001
- type: mrr_at_100
value: 52.689
- type: mrr_at_1000
value: 52.742999999999995
- type: mrr_at_3
value: 50.205999999999996
- type: mrr_at_5
value: 51.367
- type: ndcg_at_1
value: 42.57
- type: ndcg_at_10
value: 32.449
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 38.351
- type: ndcg_at_3
value: 37.044
- type: ndcg_at_5
value: 35.275
- type: precision_at_1
value: 44.272
- type: precision_at_10
value: 23.87
- type: precision_at_100
value: 7.625
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 34.365
- type: precision_at_5
value: 30.341
- type: recall_at_1
value: 5.4
- type: recall_at_10
value: 15.943999999999999
- type: recall_at_100
value: 29.805
- type: recall_at_1000
value: 61.695
- type: recall_at_3
value: 9.539
- type: recall_at_5
value: 12.127
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.047000000000004
- type: map_at_10
value: 51.6
- type: map_at_100
value: 52.449999999999996
- type: map_at_1000
value: 52.476
- type: map_at_3
value: 47.452
- type: map_at_5
value: 49.964
- type: mrr_at_1
value: 40.382
- type: mrr_at_10
value: 54.273
- type: mrr_at_100
value: 54.859
- type: mrr_at_1000
value: 54.876000000000005
- type: mrr_at_3
value: 51.014
- type: mrr_at_5
value: 52.983999999999995
- type: ndcg_at_1
value: 40.353
- type: ndcg_at_10
value: 59.11300000000001
- type: ndcg_at_100
value: 62.604000000000006
- type: ndcg_at_1000
value: 63.187000000000005
- type: ndcg_at_3
value: 51.513
- type: ndcg_at_5
value: 55.576
- type: precision_at_1
value: 40.353
- type: precision_at_10
value: 9.418
- type: precision_at_100
value: 1.1440000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.078000000000003
- type: precision_at_5
value: 16.250999999999998
- type: recall_at_1
value: 36.047000000000004
- type: recall_at_10
value: 79.22200000000001
- type: recall_at_100
value: 94.23
- type: recall_at_1000
value: 98.51100000000001
- type: recall_at_3
value: 59.678
- type: recall_at_5
value: 68.967
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.232
- type: map_at_10
value: 81.674
- type: map_at_100
value: 82.338
- type: map_at_1000
value: 82.36099999999999
- type: map_at_3
value: 78.833
- type: map_at_5
value: 80.58
- type: mrr_at_1
value: 78.64
- type: mrr_at_10
value: 85.164
- type: mrr_at_100
value: 85.317
- type: mrr_at_1000
value: 85.319
- type: mrr_at_3
value: 84.127
- type: mrr_at_5
value: 84.789
- type: ndcg_at_1
value: 78.63
- type: ndcg_at_10
value: 85.711
- type: ndcg_at_100
value: 87.238
- type: ndcg_at_1000
value: 87.444
- type: ndcg_at_3
value: 82.788
- type: ndcg_at_5
value: 84.313
- type: precision_at_1
value: 78.63
- type: precision_at_10
value: 12.977
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.113
- type: precision_at_5
value: 23.71
- type: recall_at_1
value: 68.232
- type: recall_at_10
value: 93.30199999999999
- type: recall_at_100
value: 98.799
- type: recall_at_1000
value: 99.885
- type: recall_at_3
value: 84.827
- type: recall_at_5
value: 89.188
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.71879170816294
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 59.65866311751794
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.218
- type: map_at_10
value: 10.337
- type: map_at_100
value: 12.131
- type: map_at_1000
value: 12.411
- type: map_at_3
value: 7.4270000000000005
- type: map_at_5
value: 8.913
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 30.868000000000002
- type: mrr_at_100
value: 31.903
- type: mrr_at_1000
value: 31.972
- type: mrr_at_3
value: 27.367
- type: mrr_at_5
value: 29.372
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.765
- type: ndcg_at_100
value: 24.914
- type: ndcg_at_1000
value: 30.206
- type: ndcg_at_3
value: 16.64
- type: ndcg_at_5
value: 14.712
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 9.24
- type: precision_at_100
value: 1.9560000000000002
- type: precision_at_1000
value: 0.32299999999999995
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.94
- type: recall_at_1
value: 4.218
- type: recall_at_10
value: 18.752
- type: recall_at_100
value: 39.7
- type: recall_at_1000
value: 65.57300000000001
- type: recall_at_3
value: 9.428
- type: recall_at_5
value: 13.133000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04338850207233
- type: cos_sim_spearman
value: 78.5054651430423
- type: euclidean_pearson
value: 80.30739451228612
- type: euclidean_spearman
value: 78.48377464299097
- type: manhattan_pearson
value: 80.40795049052781
- type: manhattan_spearman
value: 78.49506205443114
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.11596224442962
- type: cos_sim_spearman
value: 76.20997388935461
- type: euclidean_pearson
value: 80.56858451349109
- type: euclidean_spearman
value: 75.92659183871186
- type: manhattan_pearson
value: 80.60246102203844
- type: manhattan_spearman
value: 76.03018971432664
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 81.34691640755737
- type: cos_sim_spearman
value: 82.4018369631579
- type: euclidean_pearson
value: 81.87673092245366
- type: euclidean_spearman
value: 82.3671489960678
- type: manhattan_pearson
value: 81.88222387719948
- type: manhattan_spearman
value: 82.3816590344736
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.2836092579524
- type: cos_sim_spearman
value: 78.99982781772064
- type: euclidean_pearson
value: 80.5184271010527
- type: euclidean_spearman
value: 78.89777392101904
- type: manhattan_pearson
value: 80.53585705018664
- type: manhattan_spearman
value: 78.92898405472994
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.7349907750784
- type: cos_sim_spearman
value: 87.7611234446225
- type: euclidean_pearson
value: 86.98759326731624
- type: euclidean_spearman
value: 87.58321319424618
- type: manhattan_pearson
value: 87.03483090370842
- type: manhattan_spearman
value: 87.63278333060288
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 81.75873694924825
- type: cos_sim_spearman
value: 83.80237999094724
- type: euclidean_pearson
value: 83.55023725861537
- type: euclidean_spearman
value: 84.12744338577744
- type: manhattan_pearson
value: 83.58816983036232
- type: manhattan_spearman
value: 84.18520748676501
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.21630882940174
- type: cos_sim_spearman
value: 87.72382883437031
- type: euclidean_pearson
value: 88.69933350930333
- type: euclidean_spearman
value: 88.24660814383081
- type: manhattan_pearson
value: 88.77331018833499
- type: manhattan_spearman
value: 88.26109989380632
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 61.11854063060489
- type: cos_sim_spearman
value: 63.14678634195072
- type: euclidean_pearson
value: 61.679090067000864
- type: euclidean_spearman
value: 62.28876589509653
- type: manhattan_pearson
value: 62.082324165511004
- type: manhattan_spearman
value: 62.56030932816679
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.00319882832645
- type: cos_sim_spearman
value: 85.94529772647257
- type: euclidean_pearson
value: 85.6661390122756
- type: euclidean_spearman
value: 85.97747815545827
- type: manhattan_pearson
value: 85.58422770541893
- type: manhattan_spearman
value: 85.9237139181532
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.16198731863916
- type: mrr
value: 94.25202702163487
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.761
- type: map_at_10
value: 64.396
- type: map_at_100
value: 65.07
- type: map_at_1000
value: 65.09899999999999
- type: map_at_3
value: 61.846000000000004
- type: map_at_5
value: 63.284
- type: mrr_at_1
value: 57.667
- type: mrr_at_10
value: 65.83099999999999
- type: mrr_at_100
value: 66.36800000000001
- type: mrr_at_1000
value: 66.39399999999999
- type: mrr_at_3
value: 64.056
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 57.667
- type: ndcg_at_10
value: 68.854
- type: ndcg_at_100
value: 71.59100000000001
- type: ndcg_at_1000
value: 72.383
- type: ndcg_at_3
value: 64.671
- type: ndcg_at_5
value: 66.796
- type: precision_at_1
value: 57.667
- type: precision_at_10
value: 9.167
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.667
- type: recall_at_1
value: 54.761
- type: recall_at_10
value: 80.9
- type: recall_at_100
value: 92.767
- type: recall_at_1000
value: 99
- type: recall_at_3
value: 69.672
- type: recall_at_5
value: 75.083
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8079207920792
- type: cos_sim_ap
value: 94.88470927617445
- type: cos_sim_f1
value: 90.08179959100204
- type: cos_sim_precision
value: 92.15481171548117
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.58613861386138
- type: dot_ap
value: 82.94822578881316
- type: dot_f1
value: 77.33333333333333
- type: dot_precision
value: 79.36842105263158
- type: dot_recall
value: 75.4
- type: euclidean_accuracy
value: 99.8069306930693
- type: euclidean_ap
value: 94.81367858031837
- type: euclidean_f1
value: 90.01009081735621
- type: euclidean_precision
value: 90.83503054989816
- type: euclidean_recall
value: 89.2
- type: manhattan_accuracy
value: 99.81188118811882
- type: manhattan_ap
value: 94.91405337220161
- type: manhattan_f1
value: 90.2763561924258
- type: manhattan_precision
value: 92.45283018867924
- type: manhattan_recall
value: 88.2
- type: max_accuracy
value: 99.81188118811882
- type: max_ap
value: 94.91405337220161
- type: max_f1
value: 90.2763561924258
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.511599500053094
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 31.984728147814707
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.93428193939015
- type: mrr
value: 50.916557911043206
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.562500894537145
- type: cos_sim_spearman
value: 31.162587976726307
- type: dot_pearson
value: 22.633662187735762
- type: dot_spearman
value: 22.723000282378962
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.219
- type: map_at_10
value: 1.871
- type: map_at_100
value: 10.487
- type: map_at_1000
value: 25.122
- type: map_at_3
value: 0.657
- type: map_at_5
value: 1.0699999999999998
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 89.567
- type: mrr_at_100
value: 89.748
- type: mrr_at_1000
value: 89.748
- type: mrr_at_3
value: 88.667
- type: mrr_at_5
value: 89.567
- type: ndcg_at_1
value: 80
- type: ndcg_at_10
value: 74.533
- type: ndcg_at_100
value: 55.839000000000006
- type: ndcg_at_1000
value: 49.748
- type: ndcg_at_3
value: 79.53099999999999
- type: ndcg_at_5
value: 78.245
- type: precision_at_1
value: 84
- type: precision_at_10
value: 78.4
- type: precision_at_100
value: 56.99999999999999
- type: precision_at_1000
value: 21.98
- type: precision_at_3
value: 85.333
- type: precision_at_5
value: 84.8
- type: recall_at_1
value: 0.219
- type: recall_at_10
value: 2.02
- type: recall_at_100
value: 13.555
- type: recall_at_1000
value: 46.739999999999995
- type: recall_at_3
value: 0.685
- type: recall_at_5
value: 1.13
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.5029999999999997
- type: map_at_10
value: 11.042
- type: map_at_100
value: 16.326999999999998
- type: map_at_1000
value: 17.836
- type: map_at_3
value: 6.174
- type: map_at_5
value: 7.979
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 52.617000000000004
- type: mrr_at_100
value: 53.351000000000006
- type: mrr_at_1000
value: 53.351000000000006
- type: mrr_at_3
value: 46.939
- type: mrr_at_5
value: 50.714000000000006
- type: ndcg_at_1
value: 38.775999999999996
- type: ndcg_at_10
value: 27.125
- type: ndcg_at_100
value: 35.845
- type: ndcg_at_1000
value: 47.377
- type: ndcg_at_3
value: 29.633
- type: ndcg_at_5
value: 28.378999999999998
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 24.082
- type: precision_at_100
value: 6.877999999999999
- type: precision_at_1000
value: 1.463
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 28.571
- type: recall_at_1
value: 3.5029999999999997
- type: recall_at_10
value: 17.068
- type: recall_at_100
value: 43.361
- type: recall_at_1000
value: 78.835
- type: recall_at_3
value: 6.821000000000001
- type: recall_at_5
value: 10.357
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.0954
- type: ap
value: 14.216844153511959
- type: f1
value: 54.63687418565117
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.46293152235427
- type: f1
value: 61.744177921638645
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 41.12708617788644
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.75430649102938
- type: cos_sim_ap
value: 73.34252536948081
- type: cos_sim_f1
value: 67.53758935173774
- type: cos_sim_precision
value: 63.3672525439408
- type: cos_sim_recall
value: 72.29551451187335
- type: dot_accuracy
value: 81.71305954580676
- type: dot_ap
value: 59.5532209082386
- type: dot_f1
value: 56.18466898954705
- type: dot_precision
value: 47.830923248053395
- type: dot_recall
value: 68.07387862796834
- type: euclidean_accuracy
value: 85.81987244441795
- type: euclidean_ap
value: 73.34325409809446
- type: euclidean_f1
value: 67.83451360417443
- type: euclidean_precision
value: 64.09955388588871
- type: euclidean_recall
value: 72.0316622691293
- type: manhattan_accuracy
value: 85.68277999642368
- type: manhattan_ap
value: 73.1535450121903
- type: manhattan_f1
value: 67.928237896289
- type: manhattan_precision
value: 63.56945722171113
- type: manhattan_recall
value: 72.9287598944591
- type: max_accuracy
value: 85.81987244441795
- type: max_ap
value: 73.34325409809446
- type: max_f1
value: 67.928237896289
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.90441262079403
- type: cos_sim_ap
value: 85.79331880741438
- type: cos_sim_f1
value: 78.31563529842548
- type: cos_sim_precision
value: 74.6683424102779
- type: cos_sim_recall
value: 82.33754234678165
- type: dot_accuracy
value: 84.89928978926534
- type: dot_ap
value: 75.25819218316
- type: dot_f1
value: 69.88730119720536
- type: dot_precision
value: 64.23362374959665
- type: dot_recall
value: 76.63227594702803
- type: euclidean_accuracy
value: 89.01695967710637
- type: euclidean_ap
value: 85.98986606038852
- type: euclidean_f1
value: 78.5277880014722
- type: euclidean_precision
value: 75.22211253701876
- type: euclidean_recall
value: 82.13735756082538
- type: manhattan_accuracy
value: 88.99561454573679
- type: manhattan_ap
value: 85.92262421793953
- type: manhattan_f1
value: 78.38866094740769
- type: manhattan_precision
value: 76.02373028505282
- type: manhattan_recall
value: 80.9054511857099
- type: max_accuracy
value: 89.01695967710637
- type: max_ap
value: 85.98986606038852
- type: max_f1
value: 78.5277880014722
---
# ggml-org/e5-small-v2-Q8_0-GGUF
This model was converted to GGUF format from [`intfloat/e5-small-v2`](https://huggingface.co/intfloat/e5-small-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/e5-small-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -c 2048
```
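Since e5-small-v2 is an embedding model, you will usually want sentence embeddings rather than free-form generation. A minimal sketch using llama.cpp's `llama-embedding` tool, assuming it accepts the same `--hf-repo`/`--hf-file` flags as the commands above (flag names can differ between llama.cpp versions); the `query:`/`passage:` prefixes follow the original e5 usage:
```bash
# Sketch: embed a query string with the e5-style "query: " prefix.
llama-embedding --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf \
  -p "query: how much protein should a female eat"
```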
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ggml-org/e5-small-v2-Q8_0-GGUF --hf-file e5-small-v2-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
michaelfeil/ct2fast-pythia-6.9b | michaelfeil | null | [
"transformers",
"ctranslate2",
"int8",
"float16",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:the_pile",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,682 | 1,684 | 11 | 1 | ---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- ctranslate2
- int8
- float16
- pytorch
- causal-lm
- pythia
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [EleutherAI/pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
```bash
pip install hf-hub-ctranslate2>=2.0.6
```
Converted on 2023-05-19 using:
```
ct2-transformers-converter --model EleutherAI/pythia-6.9b --output_dir /home/michael/tmp-ct2fast-pythia-6.9b --force --copy_files tokenizer.json README.md tokenizer_config.json special_tokens_map.json .gitattributes --quantization float16
```
Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-pythia-6.9b"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
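For machines without a GPU, a minimal sketch using the same `hf-hub-ctranslate2` API as above (only the device and compute type change, matching the compatibility notes):
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
from transformers import AutoTokenizer

# int8 on CPU, per the compute_type note above
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-pythia-6.9b",
    device="cpu",
    compute_type="int8",
    tokenizer=AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b"),
)
outputs = model.generate(text=["Once upon a time"])
print(outputs)
```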
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-6.9B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `step143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
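As a quick sanity check on that count (a sketch, not part of the original card), the 154 branch names can be enumerated directly:
```python
# step0, ten log-spaced checkpoints step1..step512, and 143 evenly spaced
# checkpoints step1000..step143000, as described above
branches = (
    ["step0"]
    + [f"step{2**i}" for i in range(10)]               # step1, step2, ..., step512
    + [f"step{i}" for i in range(1000, 144000, 1000)]  # step1000, ..., step143000
)
assert len(branches) == 154
```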
You may also further fine-tune and adapt Pythia-6.9B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
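As a minimal sketch of that starting point (not from the original card; the dtype and device placement are assumptions to adapt to your hardware), the base checkpoint can be loaded with Transformers before fine-tuning:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# half precision keeps the ~6.9B-parameter checkpoint small enough for a
# single large GPU; full fine-tuning will need substantially more memory
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-6.9b",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-6.9b")
```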
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-6.9B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-6.9B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-6.9B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models were trained for 143,000 steps at a batch size
of 2M (2,097,152 tokens).<br>
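Those figures are mutually consistent, as a quick arithmetic check (a sketch, not part of the original card) shows:
```python
tokens_per_step = 2_097_152            # 2M batch size, in tokens
total_steps = 143_000
steps_between_checkpoints = 1_000

assert tokens_per_step * total_steps == 299_892_736_000              # tokens seen in training
assert tokens_per_step * steps_between_checkpoints == 2_097_152_000  # tokens between checkpoints
```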
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
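To reproduce a slice of these numbers locally, a hedged sketch of a harness invocation is below; exact flag and task names vary between lm-evaluation-harness versions, so treat this as a starting point rather than the card's official command:
```bash
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=EleutherAI/pythia-6.9b \
  --tasks lambada_openai,piqa,winogrande,arc_easy,sciq \
  --batch_size 8
```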
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] | Non_BioNLP |
newsrx/instructor-large-newsrx | newsrx | sentence-similarity | [
"sentence-transformers",
"pytorch",
"t5",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"prompt-retrieval",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"transformers",
"English",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2212.09741",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,687 | 1,687 | 13 | 0 | ---
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- prompt-retrieval
- text-reranking
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- t5
- English
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
inference: false
duplicated_from: hkunlp/instructor-large
model-index:
- name: INSTRUCTOR
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 88.13432835820896
- type: ap
value: 59.298209334395665
- type: f1
value: 83.31769058643586
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.526375
- type: ap
value: 88.16327709705504
- type: f1
value: 91.51095801287843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.856
- type: f1
value: 45.41490917650942
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.223
- type: map_at_10
value: 47.947
- type: map_at_100
value: 48.742000000000004
- type: map_at_1000
value: 48.745
- type: map_at_3
value: 43.137
- type: map_at_5
value: 45.992
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.202
- type: mrr_at_1000
value: 49.205
- type: mrr_at_3
value: 43.551
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 31.223
- type: ndcg_at_10
value: 57.045
- type: ndcg_at_100
value: 60.175
- type: ndcg_at_1000
value: 60.233000000000004
- type: ndcg_at_3
value: 47.171
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 31.223
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.63
- type: precision_at_5
value: 14.282
- type: recall_at_1
value: 31.223
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 58.89
- type: recall_at_5
value: 71.408
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.1621946393635
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.56417132407894
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29539304390207
- type: mrr
value: 76.44484017060196
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.38746499431112
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.51298701298701
- type: f1
value: 77.49041754069235
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61848554098577
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.32623280148178
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.803000000000004
- type: map_at_10
value: 48.848
- type: map_at_100
value: 50.5
- type: map_at_1000
value: 50.602999999999994
- type: map_at_3
value: 45.111000000000004
- type: map_at_5
value: 47.202
- type: mrr_at_1
value: 44.635000000000005
- type: mrr_at_10
value: 55.593
- type: mrr_at_100
value: 56.169999999999995
- type: mrr_at_1000
value: 56.19499999999999
- type: mrr_at_3
value: 53.361999999999995
- type: mrr_at_5
value: 54.806999999999995
- type: ndcg_at_1
value: 44.635000000000005
- type: ndcg_at_10
value: 55.899
- type: ndcg_at_100
value: 60.958
- type: ndcg_at_1000
value: 62.302
- type: ndcg_at_3
value: 51.051
- type: ndcg_at_5
value: 53.351000000000006
- type: precision_at_1
value: 44.635000000000005
- type: precision_at_10
value: 10.786999999999999
- type: precision_at_100
value: 1.6580000000000001
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 24.893
- type: precision_at_5
value: 17.740000000000002
- type: recall_at_1
value: 35.803000000000004
- type: recall_at_10
value: 68.657
- type: recall_at_100
value: 89.77199999999999
- type: recall_at_1000
value: 97.67
- type: recall_at_3
value: 54.066
- type: recall_at_5
value: 60.788
- type: map_at_1
value: 33.706
- type: map_at_10
value: 44.896
- type: map_at_100
value: 46.299
- type: map_at_1000
value: 46.44
- type: map_at_3
value: 41.721000000000004
- type: map_at_5
value: 43.486000000000004
- type: mrr_at_1
value: 41.592
- type: mrr_at_10
value: 50.529
- type: mrr_at_100
value: 51.22
- type: mrr_at_1000
value: 51.258
- type: mrr_at_3
value: 48.205999999999996
- type: mrr_at_5
value: 49.528
- type: ndcg_at_1
value: 41.592
- type: ndcg_at_10
value: 50.77199999999999
- type: ndcg_at_100
value: 55.383
- type: ndcg_at_1000
value: 57.288
- type: ndcg_at_3
value: 46.324
- type: ndcg_at_5
value: 48.346000000000004
- type: precision_at_1
value: 41.592
- type: precision_at_10
value: 9.516
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.399
- type: precision_at_5
value: 15.770999999999999
- type: recall_at_1
value: 33.706
- type: recall_at_10
value: 61.353
- type: recall_at_100
value: 80.182
- type: recall_at_1000
value: 91.896
- type: recall_at_3
value: 48.204
- type: recall_at_5
value: 53.89699999999999
- type: map_at_1
value: 44.424
- type: map_at_10
value: 57.169000000000004
- type: map_at_100
value: 58.202
- type: map_at_1000
value: 58.242000000000004
- type: map_at_3
value: 53.825
- type: map_at_5
value: 55.714
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 60.489000000000004
- type: mrr_at_100
value: 61.096
- type: mrr_at_1000
value: 61.112
- type: mrr_at_3
value: 58.192
- type: mrr_at_5
value: 59.611999999999995
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 63.071999999999996
- type: ndcg_at_100
value: 66.964
- type: ndcg_at_1000
value: 67.659
- type: ndcg_at_3
value: 57.74399999999999
- type: ndcg_at_5
value: 60.367000000000004
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 10.019
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.558999999999997
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 44.424
- type: recall_at_10
value: 77.02
- type: recall_at_100
value: 93.738
- type: recall_at_1000
value: 98.451
- type: recall_at_3
value: 62.888
- type: recall_at_5
value: 69.138
- type: map_at_1
value: 26.294
- type: map_at_10
value: 34.503
- type: map_at_100
value: 35.641
- type: map_at_1000
value: 35.724000000000004
- type: map_at_3
value: 31.753999999999998
- type: map_at_5
value: 33.190999999999995
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 36.53
- type: mrr_at_100
value: 37.541000000000004
- type: mrr_at_1000
value: 37.602000000000004
- type: mrr_at_3
value: 33.917
- type: mrr_at_5
value: 35.358000000000004
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 39.513999999999996
- type: ndcg_at_100
value: 44.815
- type: ndcg_at_1000
value: 46.839
- type: ndcg_at_3
value: 34.02
- type: ndcg_at_5
value: 36.522
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.161999999999999
- type: precision_at_5
value: 9.966
- type: recall_at_1
value: 26.294
- type: recall_at_10
value: 53.098
- type: recall_at_100
value: 76.877
- type: recall_at_1000
value: 91.834
- type: recall_at_3
value: 38.266
- type: recall_at_5
value: 44.287
- type: map_at_1
value: 16.407
- type: map_at_10
value: 25.185999999999996
- type: map_at_100
value: 26.533
- type: map_at_1000
value: 26.657999999999998
- type: map_at_3
value: 22.201999999999998
- type: map_at_5
value: 23.923
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 29.522
- type: mrr_at_100
value: 30.644
- type: mrr_at_1000
value: 30.713
- type: mrr_at_3
value: 26.679000000000002
- type: mrr_at_5
value: 28.483000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 30.656
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.675
- type: ndcg_at_3
value: 25.319000000000003
- type: ndcg_at_5
value: 27.992
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 12.396
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 16.407
- type: recall_at_10
value: 43.164
- type: recall_at_100
value: 69.695
- type: recall_at_1000
value: 89.41900000000001
- type: recall_at_3
value: 28.634999999999998
- type: recall_at_5
value: 35.308
- type: map_at_1
value: 30.473
- type: map_at_10
value: 41.676
- type: map_at_100
value: 43.120999999999995
- type: map_at_1000
value: 43.230000000000004
- type: map_at_3
value: 38.306000000000004
- type: map_at_5
value: 40.355999999999995
- type: mrr_at_1
value: 37.536
- type: mrr_at_10
value: 47.643
- type: mrr_at_100
value: 48.508
- type: mrr_at_1000
value: 48.551
- type: mrr_at_3
value: 45.348
- type: mrr_at_5
value: 46.744
- type: ndcg_at_1
value: 37.536
- type: ndcg_at_10
value: 47.823
- type: ndcg_at_100
value: 53.395
- type: ndcg_at_1000
value: 55.271
- type: ndcg_at_3
value: 42.768
- type: ndcg_at_5
value: 45.373000000000005
- type: precision_at_1
value: 37.536
- type: precision_at_10
value: 8.681
- type: precision_at_100
value: 1.34
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 30.473
- type: recall_at_10
value: 60.092999999999996
- type: recall_at_100
value: 82.733
- type: recall_at_1000
value: 94.875
- type: recall_at_3
value: 45.734
- type: recall_at_5
value: 52.691
- type: map_at_1
value: 29.976000000000003
- type: map_at_10
value: 41.097
- type: map_at_100
value: 42.547000000000004
- type: map_at_1000
value: 42.659000000000006
- type: map_at_3
value: 37.251
- type: map_at_5
value: 39.493
- type: mrr_at_1
value: 37.557
- type: mrr_at_10
value: 46.605000000000004
- type: mrr_at_100
value: 47.487
- type: mrr_at_1000
value: 47.54
- type: mrr_at_3
value: 43.721
- type: mrr_at_5
value: 45.411
- type: ndcg_at_1
value: 37.557
- type: ndcg_at_10
value: 47.449000000000005
- type: ndcg_at_100
value: 53.052
- type: ndcg_at_1000
value: 55.010999999999996
- type: ndcg_at_3
value: 41.439
- type: ndcg_at_5
value: 44.292
- type: precision_at_1
value: 37.557
- type: precision_at_10
value: 8.847
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 20.091
- type: precision_at_5
value: 14.384
- type: recall_at_1
value: 29.976000000000003
- type: recall_at_10
value: 60.99099999999999
- type: recall_at_100
value: 84.245
- type: recall_at_1000
value: 96.97200000000001
- type: recall_at_3
value: 43.794
- type: recall_at_5
value: 51.778999999999996
- type: map_at_1
value: 28.099166666666665
- type: map_at_10
value: 38.1365
- type: map_at_100
value: 39.44491666666667
- type: map_at_1000
value: 39.55858333333334
- type: map_at_3
value: 35.03641666666666
- type: map_at_5
value: 36.79833333333334
- type: mrr_at_1
value: 33.39966666666667
- type: mrr_at_10
value: 42.42583333333333
- type: mrr_at_100
value: 43.28575
- type: mrr_at_1000
value: 43.33741666666667
- type: mrr_at_3
value: 39.94975
- type: mrr_at_5
value: 41.41633333333334
- type: ndcg_at_1
value: 33.39966666666667
- type: ndcg_at_10
value: 43.81741666666667
- type: ndcg_at_100
value: 49.08166666666667
- type: ndcg_at_1000
value: 51.121166666666674
- type: ndcg_at_3
value: 38.73575
- type: ndcg_at_5
value: 41.18158333333333
- type: precision_at_1
value: 33.39966666666667
- type: precision_at_10
value: 7.738916666666667
- type: precision_at_100
value: 1.2265833333333331
- type: precision_at_1000
value: 0.15983333333333336
- type: precision_at_3
value: 17.967416666666665
- type: precision_at_5
value: 12.78675
- type: recall_at_1
value: 28.099166666666665
- type: recall_at_10
value: 56.27049999999999
- type: recall_at_100
value: 78.93291666666667
- type: recall_at_1000
value: 92.81608333333334
- type: recall_at_3
value: 42.09775
- type: recall_at_5
value: 48.42533333333334
- type: map_at_1
value: 23.663
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.426
- type: map_at_1000
value: 31.519000000000002
- type: map_at_3
value: 28.069
- type: map_at_5
value: 29.256999999999998
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.107
- type: mrr_at_100
value: 34.055
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.14
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.615
- type: ndcg_at_100
value: 39.776
- type: ndcg_at_1000
value: 42.05
- type: ndcg_at_3
value: 30.322
- type: ndcg_at_5
value: 32.157000000000004
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.491
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.139000000000001
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.663
- type: recall_at_10
value: 45.035
- type: recall_at_100
value: 68.554
- type: recall_at_1000
value: 85.077
- type: recall_at_3
value: 32.982
- type: recall_at_5
value: 37.688
- type: map_at_1
value: 17.403
- type: map_at_10
value: 25.197000000000003
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.487
- type: map_at_3
value: 22.733
- type: map_at_5
value: 24.114
- type: mrr_at_1
value: 21.37
- type: mrr_at_10
value: 29.091
- type: mrr_at_100
value: 30.018
- type: mrr_at_1000
value: 30.096
- type: mrr_at_3
value: 26.887
- type: mrr_at_5
value: 28.157
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.416
- type: ndcg_at_1000
value: 38.45
- type: ndcg_at_3
value: 25.764
- type: ndcg_at_5
value: 27.742
- type: precision_at_1
value: 21.37
- type: precision_at_10
value: 5.609
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 9.009
- type: recall_at_1
value: 17.403
- type: recall_at_10
value: 40.573
- type: recall_at_100
value: 64.818
- type: recall_at_1000
value: 86.53699999999999
- type: recall_at_3
value: 28.493000000000002
- type: recall_at_5
value: 33.660000000000004
- type: map_at_1
value: 28.639
- type: map_at_10
value: 38.951
- type: map_at_100
value: 40.238
- type: map_at_1000
value: 40.327
- type: map_at_3
value: 35.842
- type: map_at_5
value: 37.617
- type: mrr_at_1
value: 33.769
- type: mrr_at_10
value: 43.088
- type: mrr_at_100
value: 44.03
- type: mrr_at_1000
value: 44.072
- type: mrr_at_3
value: 40.656
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 33.769
- type: ndcg_at_10
value: 44.676
- type: ndcg_at_100
value: 50.416000000000004
- type: ndcg_at_1000
value: 52.227999999999994
- type: ndcg_at_3
value: 39.494
- type: ndcg_at_5
value: 42.013
- type: precision_at_1
value: 33.769
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.966
- type: recall_at_1
value: 28.639
- type: recall_at_10
value: 57.687999999999995
- type: recall_at_100
value: 82.541
- type: recall_at_1000
value: 94.896
- type: recall_at_3
value: 43.651
- type: recall_at_5
value: 49.925999999999995
- type: map_at_1
value: 29.57
- type: map_at_10
value: 40.004
- type: map_at_100
value: 41.75
- type: map_at_1000
value: 41.97
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.671
- type: mrr_at_1
value: 35.375
- type: mrr_at_10
value: 45.121
- type: mrr_at_100
value: 45.994
- type: mrr_at_1000
value: 46.04
- type: mrr_at_3
value: 42.227
- type: mrr_at_5
value: 43.995
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_10
value: 46.392
- type: ndcg_at_100
value: 52.196
- type: ndcg_at_1000
value: 54.274
- type: ndcg_at_3
value: 41.163
- type: ndcg_at_5
value: 43.813
- type: precision_at_1
value: 35.375
- type: precision_at_10
value: 8.676
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 19.104
- type: precision_at_5
value: 13.913
- type: recall_at_1
value: 29.57
- type: recall_at_10
value: 58.779
- type: recall_at_100
value: 83.337
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 44.005
- type: recall_at_5
value: 50.975
- type: map_at_1
value: 20.832
- type: map_at_10
value: 29.733999999999998
- type: map_at_100
value: 30.727
- type: map_at_1000
value: 30.843999999999998
- type: map_at_3
value: 26.834999999999997
- type: map_at_5
value: 28.555999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 31.791999999999998
- type: mrr_at_100
value: 32.666000000000004
- type: mrr_at_1000
value: 32.751999999999995
- type: mrr_at_3
value: 29.144
- type: mrr_at_5
value: 30.622
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.744
- type: ndcg_at_1000
value: 42.407000000000004
- type: ndcg_at_3
value: 29.421000000000003
- type: ndcg_at_5
value: 32.211
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.753999999999998
- type: precision_at_5
value: 9.353
- type: recall_at_1
value: 20.832
- type: recall_at_10
value: 48.795
- type: recall_at_100
value: 70.703
- type: recall_at_1000
value: 90.187
- type: recall_at_3
value: 34.455000000000005
- type: recall_at_5
value: 40.967
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.334
- type: map_at_10
value: 19.009999999999998
- type: map_at_100
value: 21.129
- type: map_at_1000
value: 21.328
- type: map_at_3
value: 15.152
- type: map_at_5
value: 17.084
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 36.099
- type: mrr_at_100
value: 37.069
- type: mrr_at_1000
value: 37.104
- type: mrr_at_3
value: 32.096000000000004
- type: mrr_at_5
value: 34.451
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 27.739000000000004
- type: ndcg_at_100
value: 35.836
- type: ndcg_at_1000
value: 39.242
- type: ndcg_at_3
value: 21.263
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 9.199
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.2
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 10.334
- type: recall_at_10
value: 35.177
- type: recall_at_100
value: 63.009
- type: recall_at_1000
value: 81.938
- type: recall_at_3
value: 19.914
- type: recall_at_5
value: 26.077
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.212
- type: map_at_10
value: 17.386
- type: map_at_100
value: 24.234
- type: map_at_1000
value: 25.724999999999998
- type: map_at_3
value: 12.727
- type: map_at_5
value: 14.785
- type: mrr_at_1
value: 59.25
- type: mrr_at_10
value: 68.687
- type: mrr_at_100
value: 69.133
- type: mrr_at_1000
value: 69.14099999999999
- type: mrr_at_3
value: 66.917
- type: mrr_at_5
value: 67.742
- type: ndcg_at_1
value: 48.625
- type: ndcg_at_10
value: 36.675999999999995
- type: ndcg_at_100
value: 41.543
- type: ndcg_at_1000
value: 49.241
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.707
- type: precision_at_1
value: 59.25
- type: precision_at_10
value: 28.525
- type: precision_at_100
value: 9.027000000000001
- type: precision_at_1000
value: 1.8339999999999999
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 37.35
- type: recall_at_1
value: 8.212
- type: recall_at_10
value: 23.188
- type: recall_at_100
value: 48.613
- type: recall_at_1000
value: 73.093
- type: recall_at_3
value: 14.419
- type: recall_at_5
value: 17.798
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.725
- type: f1
value: 46.50743309855908
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.086
- type: map_at_10
value: 66.914
- type: map_at_100
value: 67.321
- type: map_at_1000
value: 67.341
- type: map_at_3
value: 64.75800000000001
- type: map_at_5
value: 66.189
- type: mrr_at_1
value: 59.28600000000001
- type: mrr_at_10
value: 71.005
- type: mrr_at_100
value: 71.304
- type: mrr_at_1000
value: 71.313
- type: mrr_at_3
value: 69.037
- type: mrr_at_5
value: 70.35
- type: ndcg_at_1
value: 59.28600000000001
- type: ndcg_at_10
value: 72.695
- type: ndcg_at_100
value: 74.432
- type: ndcg_at_1000
value: 74.868
- type: ndcg_at_3
value: 68.72200000000001
- type: ndcg_at_5
value: 71.081
- type: precision_at_1
value: 59.28600000000001
- type: precision_at_10
value: 9.499
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 27.503
- type: precision_at_5
value: 17.854999999999997
- type: recall_at_1
value: 55.086
- type: recall_at_10
value: 86.453
- type: recall_at_100
value: 94.028
- type: recall_at_1000
value: 97.052
- type: recall_at_3
value: 75.821
- type: recall_at_5
value: 81.6
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.262999999999998
- type: map_at_10
value: 37.488
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.687
- type: map_at_3
value: 32.529
- type: map_at_5
value: 35.455
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.239000000000004
- type: mrr_at_100
value: 54.086
- type: mrr_at_1000
value: 54.122
- type: mrr_at_3
value: 51.235
- type: mrr_at_5
value: 52.415
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 45.446
- type: ndcg_at_100
value: 52.429
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 41.882000000000005
- type: ndcg_at_5
value: 43.178
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.931999999999999
- type: precision_at_100
value: 2.025
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 28.652
- type: precision_at_5
value: 21.204
- type: recall_at_1
value: 22.262999999999998
- type: recall_at_10
value: 52.447
- type: recall_at_100
value: 78.045
- type: recall_at_1000
value: 94.419
- type: recall_at_3
value: 38.064
- type: recall_at_5
value: 44.769
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.519
- type: map_at_10
value: 45.831
- type: map_at_100
value: 46.815
- type: map_at_1000
value: 46.899
- type: map_at_3
value: 42.836
- type: map_at_5
value: 44.65
- type: mrr_at_1
value: 65.037
- type: mrr_at_10
value: 72.16
- type: mrr_at_100
value: 72.51100000000001
- type: mrr_at_1000
value: 72.53
- type: mrr_at_3
value: 70.682
- type: mrr_at_5
value: 71.54599999999999
- type: ndcg_at_1
value: 65.037
- type: ndcg_at_10
value: 55.17999999999999
- type: ndcg_at_100
value: 58.888
- type: ndcg_at_1000
value: 60.648
- type: ndcg_at_3
value: 50.501
- type: ndcg_at_5
value: 52.977
- type: precision_at_1
value: 65.037
- type: precision_at_10
value: 11.530999999999999
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 31.483
- type: precision_at_5
value: 20.845
- type: recall_at_1
value: 32.519
- type: recall_at_10
value: 57.657000000000004
- type: recall_at_100
value: 72.30199999999999
- type: recall_at_1000
value: 84.024
- type: recall_at_3
value: 47.225
- type: recall_at_5
value: 52.113
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.3168
- type: ap
value: 83.80165516037135
- type: f1
value: 88.29942471066407
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.724999999999998
- type: map_at_10
value: 32.736
- type: map_at_100
value: 33.938
- type: map_at_1000
value: 33.991
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 31.016
- type: mrr_at_1
value: 21.361
- type: mrr_at_10
value: 33.323
- type: mrr_at_100
value: 34.471000000000004
- type: mrr_at_1000
value: 34.518
- type: mrr_at_3
value: 29.453000000000003
- type: mrr_at_5
value: 31.629
- type: ndcg_at_1
value: 21.361
- type: ndcg_at_10
value: 39.649
- type: ndcg_at_100
value: 45.481
- type: ndcg_at_1000
value: 46.775
- type: ndcg_at_3
value: 31.594
- type: ndcg_at_5
value: 35.543
- type: precision_at_1
value: 21.361
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.514999999999999
- type: precision_at_5
value: 10.100000000000001
- type: recall_at_1
value: 20.724999999999998
- type: recall_at_10
value: 61.034
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 97.86399999999999
- type: recall_at_3
value: 39.072
- type: recall_at_5
value: 48.53
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.8919288645691
- type: f1
value: 93.57059586398059
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.97993616051072
- type: f1
value: 48.244319183606535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.90047074646941
- type: f1
value: 66.48999056063725
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34566240753195
- type: f1
value: 73.54164154290658
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.21866934757011
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.000936217235534
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.68189362520352
- type: mrr
value: 32.69603637784303
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.078
- type: map_at_10
value: 12.671
- type: map_at_100
value: 16.291
- type: map_at_1000
value: 17.855999999999998
- type: map_at_3
value: 9.610000000000001
- type: map_at_5
value: 11.152
- type: mrr_at_1
value: 43.963
- type: mrr_at_10
value: 53.173
- type: mrr_at_100
value: 53.718999999999994
- type: mrr_at_1000
value: 53.756
- type: mrr_at_3
value: 50.980000000000004
- type: mrr_at_5
value: 52.42
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.086
- type: ndcg_at_100
value: 32.545
- type: ndcg_at_1000
value: 41.144999999999996
- type: ndcg_at_3
value: 39.434999999999995
- type: ndcg_at_5
value: 37.888
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.014999999999997
- type: precision_at_100
value: 8.594
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 37.049
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 6.078
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 34.512
- type: recall_at_1000
value: 65.447
- type: recall_at_3
value: 10.706
- type: recall_at_5
value: 13.158
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.378000000000004
- type: map_at_10
value: 42.178
- type: map_at_100
value: 43.32
- type: map_at_1000
value: 43.358000000000004
- type: map_at_3
value: 37.474000000000004
- type: map_at_5
value: 40.333000000000006
- type: mrr_at_1
value: 30.823
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.494
- type: mrr_at_1000
value: 45.519
- type: mrr_at_3
value: 40.585
- type: mrr_at_5
value: 43.146
- type: ndcg_at_1
value: 30.794
- type: ndcg_at_10
value: 50.099000000000004
- type: ndcg_at_100
value: 54.900999999999996
- type: ndcg_at_1000
value: 55.69499999999999
- type: ndcg_at_3
value: 41.238
- type: ndcg_at_5
value: 46.081
- type: precision_at_1
value: 30.794
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.926000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.378000000000004
- type: recall_at_10
value: 71.842
- type: recall_at_100
value: 92.565
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 49.053999999999995
- type: recall_at_5
value: 60.207
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.557
- type: map_at_10
value: 84.729
- type: map_at_100
value: 85.369
- type: map_at_1000
value: 85.382
- type: map_at_3
value: 81.72
- type: map_at_5
value: 83.613
- type: mrr_at_1
value: 81.3
- type: mrr_at_10
value: 87.488
- type: mrr_at_100
value: 87.588
- type: mrr_at_1000
value: 87.589
- type: mrr_at_3
value: 86.53
- type: mrr_at_5
value: 87.18599999999999
- type: ndcg_at_1
value: 81.28999999999999
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 89.637
- type: ndcg_at_1000
value: 89.70700000000001
- type: ndcg_at_3
value: 85.55199999999999
- type: ndcg_at_5
value: 87.154
- type: precision_at_1
value: 81.28999999999999
- type: precision_at_10
value: 13.489999999999998
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.708
- type: recall_at_1
value: 70.557
- type: recall_at_10
value: 95.645
- type: recall_at_100
value: 99.693
- type: recall_at_1000
value: 99.995
- type: recall_at_3
value: 87.359
- type: recall_at_5
value: 91.89699999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.65060114776209
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.63271250680617
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.263
- type: map_at_10
value: 10.801
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.224
- type: map_at_3
value: 7.362
- type: map_at_5
value: 9.149000000000001
- type: mrr_at_1
value: 21
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.513
- type: mrr_at_1000
value: 32.58
- type: mrr_at_3
value: 28.116999999999997
- type: mrr_at_5
value: 29.976999999999997
- type: ndcg_at_1
value: 21
- type: ndcg_at_10
value: 18.551000000000002
- type: ndcg_at_100
value: 26.657999999999998
- type: ndcg_at_1000
value: 32.485
- type: ndcg_at_3
value: 16.834
- type: ndcg_at_5
value: 15.204999999999998
- type: precision_at_1
value: 21
- type: precision_at_10
value: 9.84
- type: precision_at_100
value: 2.16
- type: precision_at_1000
value: 0.35500000000000004
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 13.62
- type: recall_at_1
value: 4.263
- type: recall_at_10
value: 19.922
- type: recall_at_100
value: 43.808
- type: recall_at_1000
value: 72.14500000000001
- type: recall_at_3
value: 9.493
- type: recall_at_5
value: 13.767999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 81.27446313317233
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 76.27963301217527
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 88.18495048450949
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.91982338692046
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 89.00896818385291
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 85.48814644586132
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.30116926966582
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.74132963032342
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.87741355780479
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.0019012295875
- type: mrr
value: 94.70267024188593
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.05
- type: map_at_10
value: 59.36
- type: map_at_100
value: 59.967999999999996
- type: map_at_1000
value: 60.023
- type: map_at_3
value: 56.515
- type: map_at_5
value: 58.272999999999996
- type: mrr_at_1
value: 53
- type: mrr_at_10
value: 61.102000000000004
- type: mrr_at_100
value: 61.476
- type: mrr_at_1000
value: 61.523
- type: mrr_at_3
value: 58.778
- type: mrr_at_5
value: 60.128
- type: ndcg_at_1
value: 53
- type: ndcg_at_10
value: 64.43100000000001
- type: ndcg_at_100
value: 66.73599999999999
- type: ndcg_at_1000
value: 68.027
- type: ndcg_at_3
value: 59.279
- type: ndcg_at_5
value: 61.888
- type: precision_at_1
value: 53
- type: precision_at_10
value: 8.767
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.444000000000003
- type: precision_at_5
value: 15.667
- type: recall_at_1
value: 50.05
- type: recall_at_10
value: 78.511
- type: recall_at_100
value: 88.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.117
- type: recall_at_5
value: 70.867
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72178217821782
- type: cos_sim_ap
value: 93.0728601593541
- type: cos_sim_f1
value: 85.6727976766699
- type: cos_sim_precision
value: 83.02063789868667
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.72178217821782
- type: dot_ap
value: 93.07287396168348
- type: dot_f1
value: 85.6727976766699
- type: dot_precision
value: 83.02063789868667
- type: dot_recall
value: 88.5
- type: euclidean_accuracy
value: 99.72178217821782
- type: euclidean_ap
value: 93.07285657982895
- type: euclidean_f1
value: 85.6727976766699
- type: euclidean_precision
value: 83.02063789868667
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 93.02792973059809
- type: manhattan_f1
value: 85.7727737973388
- type: manhattan_precision
value: 87.84067085953879
- type: manhattan_recall
value: 83.8
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 93.07287396168348
- type: max_f1
value: 85.7727737973388
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.77583615550819
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.151636938606956
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.16607939471187
- type: mrr
value: 52.95172046091163
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.314646669495666
- type: cos_sim_spearman
value: 31.83562491439455
- type: dot_pearson
value: 31.314590842874157
- type: dot_spearman
value: 31.83363065810437
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.3010000000000002
- type: map_at_100
value: 7.2139999999999995
- type: map_at_1000
value: 20.179
- type: map_at_3
value: 0.528
- type: map_at_5
value: 0.8019999999999999
- type: mrr_at_1
value: 72
- type: mrr_at_10
value: 83.39999999999999
- type: mrr_at_100
value: 83.39999999999999
- type: mrr_at_1000
value: 83.39999999999999
- type: mrr_at_3
value: 81.667
- type: mrr_at_5
value: 83.06700000000001
- type: ndcg_at_1
value: 66
- type: ndcg_at_10
value: 58.059000000000005
- type: ndcg_at_100
value: 44.316
- type: ndcg_at_1000
value: 43.147000000000006
- type: ndcg_at_3
value: 63.815999999999995
- type: ndcg_at_5
value: 63.005
- type: precision_at_1
value: 72
- type: precision_at_10
value: 61.4
- type: precision_at_100
value: 45.62
- type: precision_at_1000
value: 19.866
- type: precision_at_3
value: 70
- type: precision_at_5
value: 68.8
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.517
- type: recall_at_100
value: 10.587
- type: recall_at_1000
value: 41.233
- type: recall_at_3
value: 0.573
- type: recall_at_5
value: 0.907
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.894
- type: map_at_10
value: 8.488999999999999
- type: map_at_100
value: 14.445
- type: map_at_1000
value: 16.078
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.019
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 39.82
- type: mrr_at_100
value: 40.752
- type: mrr_at_1000
value: 40.771
- type: mrr_at_3
value: 34.354
- type: mrr_at_5
value: 37.721
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 21.563
- type: ndcg_at_100
value: 33.857
- type: ndcg_at_1000
value: 46.199
- type: ndcg_at_3
value: 22.296
- type: ndcg_at_5
value: 21.770999999999997
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.142999999999999
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 1.894
- type: recall_at_10
value: 14.931
- type: recall_at_100
value: 45.524
- type: recall_at_1000
value: 83.243
- type: recall_at_3
value: 5.712
- type: recall_at_5
value: 8.386000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.049
- type: ap
value: 13.85116971310922
- type: f1
value: 54.37504302487686
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.1312959818902
- type: f1
value: 64.11413877009383
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.13103431861502
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.327889372355
- type: cos_sim_ap
value: 77.42059895975699
- type: cos_sim_f1
value: 71.02706903250873
- type: cos_sim_precision
value: 69.75324344950394
- type: cos_sim_recall
value: 72.34828496042216
- type: dot_accuracy
value: 87.327889372355
- type: dot_ap
value: 77.4209479346677
- type: dot_f1
value: 71.02706903250873
- type: dot_precision
value: 69.75324344950394
- type: dot_recall
value: 72.34828496042216
- type: euclidean_accuracy
value: 87.327889372355
- type: euclidean_ap
value: 77.42096495861037
- type: euclidean_f1
value: 71.02706903250873
- type: euclidean_precision
value: 69.75324344950394
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.31000774870358
- type: manhattan_ap
value: 77.38930750711619
- type: manhattan_f1
value: 71.07935314027831
- type: manhattan_precision
value: 67.70957726295677
- type: manhattan_recall
value: 74.80211081794195
- type: max_accuracy
value: 87.327889372355
- type: max_ap
value: 77.42096495861037
- type: max_f1
value: 71.07935314027831
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.58939729110878
- type: cos_sim_ap
value: 87.17594155025475
- type: cos_sim_f1
value: 79.21146953405018
- type: cos_sim_precision
value: 76.8918527109307
- type: cos_sim_recall
value: 81.67539267015707
- type: dot_accuracy
value: 89.58939729110878
- type: dot_ap
value: 87.17593963273593
- type: dot_f1
value: 79.21146953405018
- type: dot_precision
value: 76.8918527109307
- type: dot_recall
value: 81.67539267015707
- type: euclidean_accuracy
value: 89.58939729110878
- type: euclidean_ap
value: 87.17592466925834
- type: euclidean_f1
value: 79.21146953405018
- type: euclidean_precision
value: 76.8918527109307
- type: euclidean_recall
value: 81.67539267015707
- type: manhattan_accuracy
value: 89.62626615438352
- type: manhattan_ap
value: 87.16589873161546
- type: manhattan_f1
value: 79.25143598295348
- type: manhattan_precision
value: 76.39494177323712
- type: manhattan_recall
value: 82.32984293193716
- type: max_accuracy
value: 89.62626615438352
- type: max_ap
value: 87.17594155025475
- type: max_f1
value: 79.25143598295348
---
# hkunlp/instructor-large
We introduce **Instructor**👨🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨🏫 achieves state-of-the-art performance on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))!
The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
**************************** **Updates** ****************************
* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!
## Quick start
<hr />
## Installation
```bash
pip install InstructorEmbedding
```
## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
# Each input is an [instruction, text] pair; encode returns one embedding per pair.
embeddings = model.encode([[instruction, sentence]])
print(embeddings)
```
## Use cases
<hr />
## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you can follow the unified template below to write instructions (a short example follows the list):
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
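For illustration, here are a few instruction strings assembled from this template; the domains and objectives are placeholders you would replace with your own, and the finance/classification example is hypothetical rather than taken from the paper:
```python
# Instruction strings built from the "Represent the `domain` `text_type` for `task_objective`:" template.
science_title = "Represent the Science title:"                                     # domain + text_type
finance_classification = "Represent the Financial statement for classification:"   # domain + text_type + task_objective (hypothetical)
wiki_retrieval = "Represent the Wikipedia document for retrieval:"                  # as used in the retrieval example below
```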
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity
sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'],
['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'],
['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a,embeddings_b)
print(similarities)
```
## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
## Clustering
Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster
sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'],
['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"],
['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
GPT4All-Community/Phi-3.1-mini-128k-instruct-GGUF | GPT4All-Community | text-generation | [
"transformers",
"gguf",
"text-generation-inference",
"GGUF",
"GPT4All-community",
"GPT4All",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"region:us",
"conversational"
] | 1,722 | 1,723 | 1,032 | 2 | ---
base_model: Microsoft/Phi-3-Mini-128K-Instruct
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
model_name: Phi-3-Mini-128K-Instruct
pipeline_tag: text-generation
tags:
- text-generation-inference
- transformers
- GGUF
- GPT4All-community
- GPT4All
- nlp
- code
inference: false
model_creator: Microsoft
model_type: phi3
quantized_by: ThiloteE
---
> [!NOTE]
> This model is assumed to perform well but may require more testing and user feedback. Be aware that only models featured within the GPT4All GUI are curated and officially supported by Nomic. Use at your own risk.
# About
<!-- ### quantize_version: 3 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
- Static quants of https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/ at commit [d548c23](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/commit/d548c233192db00165d842bf8edff054bb3212f8)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [c3776ca](https://github.com/ggerganov/llama.cpp/commit/c3776cacabce2ee35f172fb72be7a519752125fa)
# Notes
These quants were created with a customized configuration that has been shown not to cause visible end-of-string (EOS) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json and tokenizer_config.json therefore differ from the configuration found in the original model's repository at the time these quants were created.
# Prompt Template (for GPT4All)
Example System Prompt:
```Markdown
<|system|>
You are a helpful assistant.<|end|>
```
Chat Template:
```Markdown
<|user|>
%1<|end|>
<|assistant|>
%2<|end|>
```
Do not miss the newlines at the end! Have a look at the raw README.md file, as it differs from the rendered output in the model card.
# Context Length
`131072`
Use a lower value during inference if you do not have enough RAM or VRAM (see the sketch below).
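As a minimal sketch, assuming the `gpt4all` Python bindings are installed and the Q4_0 file from this repository has already been downloaded (the file name, `model_path`, and the `n_ctx` argument below are assumptions; `n_ctx` is only exposed in recent `gpt4all` releases):
```python
from gpt4all import GPT4All

# Load the local quant with a reduced context window to save RAM/VRAM.
model = GPT4All(
    "Phi-3-Mini-128K-Instruct-Q4_0.gguf",  # file name of the downloaded quant
    model_path=".",                        # folder that contains the .gguf file
    n_ctx=4096,                            # much lower than the 131072 maximum
)

with model.chat_session(system_prompt="You are a helpful assistant."):
    print(model.generate("Summarize what a GGUF quantization is.", max_tokens=200))
```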
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/Phi-3-Mini-128K-Instruct-GGUF/resolve/main/Phi-3-Mini-128K-Instruct-Q4_0.gguf) | Q4_0 | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/GPT4All-Community/Phi-3-Mini-128K-Instruct-GGUF/resolve/main/Phi-3-Mini-128K-Instruct-F16.gguf) | f16 | 7.7 | 16 bpw, overkill |
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for
more details, including on how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for the inspiration for this model card and for their contributions to open source, and 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)
------
<!-- footer end -->
<!-- original-model-card start -->
------
------
# Original Model card:
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains in long-context understanding, instruction following, and structured output.
We also improved multi-turn conversation quality, added explicit support for the `<|system|>` tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight improvements on instruction following, structured output, reasoning, and long-context understanding of the new release on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: If users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For the model conversion, e.g. GGUF and other formats, we invite the community to experiment with various approaches and share their valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
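For example, a quick way to inspect the vocabulary and register extra special tokens with `transformers` (a minimal sketch; the `<|my_tool|>` token is purely hypothetical):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

print(len(tokenizer))                 # current vocabulary size (at most 32064)
print(tokenizer.special_tokens_map)   # placeholder/special tokens shipped with the model

# Additional special tokens can be registered up to the model's vocabulary size.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|my_tool|>"]})
```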
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
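In practice you do not have to assemble this string by hand: the tokenizer's built-in chat template should render the same `<|system|>` / `<|user|>` / `<|assistant|>` format (a minimal sketch using the standard `transformers` chat-template API):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# Produces the prompt string shown above, ending with <|assistant|> so the model continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```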
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
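For example (a minimal sketch; `flash_attention_2` additionally requires a compatible GPU and the `flash-attn` package, and `"eager"` is the fallback mentioned in the Hardware section below):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",  # use "eager" on V100 or earlier GPUs
)
```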
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: June, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in small models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
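For orientation, a minimal single-process sketch of the same idea is shown below. It assumes recent `trl` and `datasets` releases (argument names such as `max_seq_length` and `tokenizer` have moved between `SFTTrainer` and `SFTConfig` across versions), and the dataset choice is purely illustrative; the linked `sample_finetune.py` remains the reference recipe.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "microsoft/Phi-3-mini-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Illustrative chat-formatted dataset; replace with your own data.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer trl versions name this argument processing_class
    train_dataset=dataset,
    args=SFTConfig(output_dir="phi3-mini-sft", max_seq_length=2048),
)
trainer.train()
```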
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length; the model is therefore capable of several long-context tasks, including long document/meeting summarization and long-document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets at the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model with only 3.8B parameters achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store much world knowledge, which can be seen, for example, in its low performance on TriviaQA. However, we believe such weakness can be mitigated by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Mini across CPU, GPU, and mobile devices.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
<!-- original-model-card end -->
<!-- end -->
| [
"SUMMARIZATION"
] | [
"MEDQA"
] | Non_BioNLP |
tensorblock/gte-Qwen2-1.5B-instruct-GGUF | tensorblock | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"TensorBlock",
"GGUF",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,731 | 1,731 | 114 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- TensorBlock
- GGUF
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## Alibaba-NLP/gte-Qwen2-1.5B-instruct - GGUF
This repo contains GGUF format model files for [Alibaba-NLP/gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
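The template above can also be filled in programmatically. Below is a minimal sketch using the `llama-cpp-python` bindings (an assumption — any llama.cpp-compatible runtime should work); the model file name, system prompt, and user prompt are placeholders.
```python
# Minimal sketch (assumes llama-cpp-python is installed and the Q4_K_M file
# from the table below has already been downloaded; names are placeholders).
from llama_cpp import Llama

llm = Llama(model_path="gte-Qwen2-1.5B-instruct-Q4_K_M.gguf", n_ctx=4096)

system_prompt = "You are a helpful assistant."
prompt = "Summarize the benefits of GGUF quantization in one sentence."

# Fill the prompt template shown above.
full_prompt = (
    f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(full_prompt, max_tokens=128, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```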
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gte-Qwen2-1.5B-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q2_K.gguf) | Q2_K | 0.701 GB | smallest, significant quality loss - not recommended for most purposes |
| [gte-Qwen2-1.5B-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q3_K_S.gguf) | Q3_K_S | 0.802 GB | very small, high quality loss |
| [gte-Qwen2-1.5B-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q3_K_M.gguf) | Q3_K_M | 0.860 GB | very small, high quality loss |
| [gte-Qwen2-1.5B-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q3_K_L.gguf) | Q3_K_L | 0.913 GB | small, substantial quality loss |
| [gte-Qwen2-1.5B-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q4_0.gguf) | Q4_0 | 0.992 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gte-Qwen2-1.5B-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q4_K_S.gguf) | Q4_K_S | 0.997 GB | small, greater quality loss |
| [gte-Qwen2-1.5B-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q4_K_M.gguf) | Q4_K_M | 1.040 GB | medium, balanced quality - recommended |
| [gte-Qwen2-1.5B-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q5_0.gguf) | Q5_0 | 1.172 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gte-Qwen2-1.5B-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q5_K_S.gguf) | Q5_K_S | 1.172 GB | large, low quality loss - recommended |
| [gte-Qwen2-1.5B-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q5_K_M.gguf) | Q5_K_M | 1.197 GB | large, very low quality loss - recommended |
| [gte-Qwen2-1.5B-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q6_K.gguf) | Q6_K | 1.363 GB | very large, extremely low quality loss |
| [gte-Qwen2-1.5B-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/gte-Qwen2-1.5B-instruct-GGUF/blob/main/gte-Qwen2-1.5B-instruct-Q8_0.gguf) | Q8_0 | 1.764 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/gte-Qwen2-1.5B-instruct-GGUF --include "gte-Qwen2-1.5B-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/gte-Qwen2-1.5B-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
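The same file can also be fetched from Python via the `huggingface_hub` API; the sketch below mirrors the CLI command above (the repo id and filename come from the table, and the local directory is a placeholder).
```python
# Minimal Python alternative to the CLI command above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="tensorblock/gte-Qwen2-1.5B-instruct-GGUF",
    filename="gte-Qwen2-1.5B-instruct-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",
)
print(local_path)  # path to the downloaded GGUF file
```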
| ["SUMMARIZATION"] | ["BIOSSES", "SCIFACT"] | Non_BioNLP |
croissantllm/base_120k | croissantllm | text2text-generation | ["transformers", "pytorch", "llama", "text-generation", "legal", "code", "text-generation-inference", "art", "text2text-generation", "fr", "en", "dataset:cerebras/SlimPajama-627B", "dataset:uonlp/CulturaX", "dataset:pg19", "dataset:bigcode/starcoderdata", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | 1,705 | 1,706 | 5 | 0 |
---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (120k steps)
This model is part of the CroissantLLM initiative, and corresponds to the checkpoint after 120k steps (1.89T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models and strong translation models. We evaluate our model through the FMTI framework and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model: it is not finetuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_120k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
prompt = (
    "I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\n"
    "He is heading to the market. -> Il va au marché.\n"
    "We are running on the beach. ->"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# remove bos token
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| ["TRANSLATION"] | ["CRAFT"] | Non_BioNLP |
minishlab/potion-base-8M | minishlab | null | ["model2vec", "onnx", "safetensors", "embeddings", "static-embeddings", "mteb", "sentence-transformers", "license:mit", "model-index", "region:us"] | 1,730 | 1,737 | 325,384 | 50 |
---
library_name: model2vec
license: mit
tags:
- embeddings
- static-embeddings
- mteb
- sentence-transformers
model-index:
- name: potion-base-8M
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.15142428785607
- type: ap
value: 20.626102291010103
- type: ap_weighted
value: 20.626102291010103
- type: f1
value: 59.187001923736894
- type: f1_weighted
value: 77.34906471545477
- type: main_score
value: 72.15142428785607
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7910447761194
- type: ap
value: 33.038020188116036
- type: ap_weighted
value: 33.038020188116036
- type: f1
value: 65.03799728338926
- type: f1_weighted
value: 74.32788084269461
- type: main_score
value: 71.7910447761194
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 72.47644999999999
- type: ap
value: 66.91002822830875
- type: ap_weighted
value: 66.91002822830875
- type: f1
value: 72.2600863044581
- type: f1_weighted
value: 72.2600863044581
- type: main_score
value: 72.47644999999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.012
- type: f1
value: 35.38209336470206
- type: f1_weighted
value: 35.38209336470206
- type: main_score
value: 36.012
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 41.966
- type: map_at_1
value: 21.124000000000002
- type: map_at_10
value: 34.335
- type: map_at_100
value: 35.618
- type: map_at_1000
value: 35.653
- type: map_at_20
value: 35.21
- type: map_at_3
value: 30.287
- type: map_at_5
value: 32.364
- type: mrr_at_1
value: 21.62162162162162
- type: mrr_at_10
value: 34.509104969631224
- type: mrr_at_100
value: 35.79229946325059
- type: mrr_at_1000
value: 35.82767320968403
- type: mrr_at_20
value: 35.38485605181455
- type: mrr_at_3
value: 30.405405405405343
- type: mrr_at_5
value: 32.539118065433755
- type: nauc_map_at_1000_diff1
value: 7.960826255212609
- type: nauc_map_at_1000_max
value: -0.036381315067780806
- type: nauc_map_at_1000_std
value: 4.317766293607543
- type: nauc_map_at_100_diff1
value: 7.96318422584977
- type: nauc_map_at_100_max
value: -0.007800758201736421
- type: nauc_map_at_100_std
value: 4.362078927714198
- type: nauc_map_at_10_diff1
value: 7.718022643886373
- type: nauc_map_at_10_max
value: -0.28312250079415263
- type: nauc_map_at_10_std
value: 4.079196099329437
- type: nauc_map_at_1_diff1
value: 9.240393281366906
- type: nauc_map_at_1_max
value: -4.35798405693968
- type: nauc_map_at_1_std
value: 1.5076565659508505
- type: nauc_map_at_20_diff1
value: 8.028053857747947
- type: nauc_map_at_20_max
value: 0.0719807687813251
- type: nauc_map_at_20_std
value: 4.394812024847373
- type: nauc_map_at_3_diff1
value: 7.953781299828595
- type: nauc_map_at_3_max
value: -0.573072664182506
- type: nauc_map_at_3_std
value: 3.110821611511372
- type: nauc_map_at_5_diff1
value: 7.3135486297676415
- type: nauc_map_at_5_max
value: -1.2456304709603878
- type: nauc_map_at_5_std
value: 3.2332006196074805
- type: nauc_mrr_at_1000_diff1
value: 6.511595076207588
- type: nauc_mrr_at_1000_max
value: -0.4777573692286575
- type: nauc_mrr_at_1000_std
value: 4.19518565742107
- type: nauc_mrr_at_100_diff1
value: 6.515632481906436
- type: nauc_mrr_at_100_max
value: -0.44877259463397945
- type: nauc_mrr_at_100_std
value: 4.23945026873963
- type: nauc_mrr_at_10_diff1
value: 6.325261150908693
- type: nauc_mrr_at_10_max
value: -0.6968688229450172
- type: nauc_mrr_at_10_std
value: 3.9631303923167294
- type: nauc_mrr_at_1_diff1
value: 7.4844946822832785
- type: nauc_mrr_at_1_max
value: -4.0195803039697315
- type: nauc_mrr_at_1_std
value: 1.3908984330415426
- type: nauc_mrr_at_20_diff1
value: 6.596479652899773
- type: nauc_mrr_at_20_max
value: -0.3643520262705732
- type: nauc_mrr_at_20_std
value: 4.273437423781988
- type: nauc_mrr_at_3_diff1
value: 6.3669450211955745
- type: nauc_mrr_at_3_max
value: -1.2252447747465325
- type: nauc_mrr_at_3_std
value: 2.941708547001192
- type: nauc_mrr_at_5_diff1
value: 5.907234785613739
- type: nauc_mrr_at_5_max
value: -1.6860364992754489
- type: nauc_mrr_at_5_std
value: 3.0737345356263406
- type: nauc_ndcg_at_1000_diff1
value: 7.9706658500975704
- type: nauc_ndcg_at_1000_max
value: 1.5533941879318276
- type: nauc_ndcg_at_1000_std
value: 5.933724413159287
- type: nauc_ndcg_at_100_diff1
value: 8.107414913432397
- type: nauc_ndcg_at_100_max
value: 2.5869418793842778
- type: nauc_ndcg_at_100_std
value: 7.322146884970876
- type: nauc_ndcg_at_10_diff1
value: 7.669807780113455
- type: nauc_ndcg_at_10_max
value: 1.886214180834648
- type: nauc_ndcg_at_10_std
value: 6.055781567147952
- type: nauc_ndcg_at_1_diff1
value: 9.240393281366906
- type: nauc_ndcg_at_1_max
value: -4.35798405693968
- type: nauc_ndcg_at_1_std
value: 1.5076565659508505
- type: nauc_ndcg_at_20_diff1
value: 8.661303229272372
- type: nauc_ndcg_at_20_max
value: 3.303174862536166
- type: nauc_ndcg_at_20_std
value: 7.493758825967179
- type: nauc_ndcg_at_3_diff1
value: 7.858281169135036
- type: nauc_ndcg_at_3_max
value: 0.7079724865506055
- type: nauc_ndcg_at_3_std
value: 3.7402042497720958
- type: nauc_ndcg_at_5_diff1
value: 6.68694262946663
- type: nauc_ndcg_at_5_max
value: -0.43002529778264326
- type: nauc_ndcg_at_5_std
value: 3.9597009492387265
- type: nauc_precision_at_1000_diff1
value: -28.217119971169463
- type: nauc_precision_at_1000_max
value: 17.425278660692022
- type: nauc_precision_at_1000_std
value: 46.7473304347162
- type: nauc_precision_at_100_diff1
value: 8.738254686624805
- type: nauc_precision_at_100_max
value: 32.88945783040687
- type: nauc_precision_at_100_std
value: 48.42583030760342
- type: nauc_precision_at_10_diff1
value: 7.873361516017592
- type: nauc_precision_at_10_max
value: 9.802552072953949
- type: nauc_precision_at_10_std
value: 13.506647301311148
- type: nauc_precision_at_1_diff1
value: 9.240393281366906
- type: nauc_precision_at_1_max
value: -4.35798405693968
- type: nauc_precision_at_1_std
value: 1.5076565659508505
- type: nauc_precision_at_20_diff1
value: 13.008220519097161
- type: nauc_precision_at_20_max
value: 20.829507014709748
- type: nauc_precision_at_20_std
value: 25.02998005000373
- type: nauc_precision_at_3_diff1
value: 7.685752623087433
- type: nauc_precision_at_3_max
value: 4.126629771323765
- type: nauc_precision_at_3_std
value: 5.440817692025366
- type: nauc_precision_at_5_diff1
value: 4.879990376967901
- type: nauc_precision_at_5_max
value: 1.7076492862153407
- type: nauc_precision_at_5_std
value: 6.009634283832547
- type: nauc_recall_at_1000_diff1
value: -28.217119971166543
- type: nauc_recall_at_1000_max
value: 17.425278660689965
- type: nauc_recall_at_1000_std
value: 46.74733043471749
- type: nauc_recall_at_100_diff1
value: 8.738254686625181
- type: nauc_recall_at_100_max
value: 32.8894578304071
- type: nauc_recall_at_100_std
value: 48.425830307603746
- type: nauc_recall_at_10_diff1
value: 7.87336151601764
- type: nauc_recall_at_10_max
value: 9.802552072953997
- type: nauc_recall_at_10_std
value: 13.506647301311201
- type: nauc_recall_at_1_diff1
value: 9.240393281366906
- type: nauc_recall_at_1_max
value: -4.35798405693968
- type: nauc_recall_at_1_std
value: 1.5076565659508505
- type: nauc_recall_at_20_diff1
value: 13.008220519097097
- type: nauc_recall_at_20_max
value: 20.82950701470975
- type: nauc_recall_at_20_std
value: 25.02998005000377
- type: nauc_recall_at_3_diff1
value: 7.685752623087458
- type: nauc_recall_at_3_max
value: 4.126629771323791
- type: nauc_recall_at_3_std
value: 5.440817692025401
- type: nauc_recall_at_5_diff1
value: 4.879990376967856
- type: nauc_recall_at_5_max
value: 1.7076492862153638
- type: nauc_recall_at_5_std
value: 6.009634283832578
- type: ndcg_at_1
value: 21.124000000000002
- type: ndcg_at_10
value: 41.966
- type: ndcg_at_100
value: 47.751
- type: ndcg_at_1000
value: 48.635
- type: ndcg_at_20
value: 45.08
- type: ndcg_at_3
value: 33.505
- type: ndcg_at_5
value: 37.266
- type: precision_at_1
value: 21.124000000000002
- type: precision_at_10
value: 6.643000000000001
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 3.93
- type: precision_at_3
value: 14.296000000000001
- type: precision_at_5
value: 10.413
- type: recall_at_1
value: 21.124000000000002
- type: recall_at_10
value: 66.43
- type: recall_at_100
value: 92.461
- type: recall_at_1000
value: 99.289
- type: recall_at_20
value: 78.592
- type: recall_at_3
value: 42.888
- type: recall_at_5
value: 52.063
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 35.387660145946825
- type: v_measure
value: 35.387660145946825
- type: v_measure_std
value: 14.022525689022785
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 25.26058942964131
- type: v_measure
value: 25.26058942964131
- type: v_measure_std
value: 14.850432186356857
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 54.13950871400633
- type: map
value: 54.13950871400633
- type: mrr
value: 68.87437892978059
- type: nAUC_map_diff1
value: 3.489277672557011
- type: nAUC_map_max
value: 15.848457273691064
- type: nAUC_map_std
value: 5.166813098270773
- type: nAUC_mrr_diff1
value: 4.9924344024669765
- type: nAUC_mrr_max
value: 21.861692980537956
- type: nAUC_mrr_std
value: 8.256966784037171
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 79.11612010879227
- type: cosine_spearman
value: 75.85775256673794
- type: euclidean_pearson
value: 77.46080265077437
- type: euclidean_spearman
value: 75.85775256673794
- type: main_score
value: 75.85775256673794
- type: manhattan_pearson
value: 77.73191375456281
- type: manhattan_spearman
value: 75.98908086034702
- type: pearson
value: 79.11612010879227
- type: spearman
value: 75.85775256673794
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 72.63636363636363
- type: f1
value: 71.69751597573539
- type: f1_weighted
value: 71.69751597573539
- type: main_score
value: 72.63636363636363
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 30.861840536151014
- type: v_measure
value: 30.861840536151014
- type: v_measure_std
value: 0.8096483751274005
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 20.219544420664455
- type: v_measure
value: 20.219544420664455
- type: v_measure_std
value: 0.7431903039116942
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 31.835
- type: map_at_1
value: 19.939
- type: map_at_10
value: 26.924
- type: map_at_100
value: 28.16
- type: map_at_1000
value: 28.316999999999997
- type: map_at_20
value: 27.554000000000002
- type: map_at_3
value: 24.45
- type: map_at_5
value: 25.751
- type: mrr_at_1
value: 25.894134477825464
- type: mrr_at_10
value: 32.65152031246451
- type: mrr_at_100
value: 33.58362210177363
- type: mrr_at_1000
value: 33.66415578481638
- type: mrr_at_20
value: 33.158616397714056
- type: mrr_at_3
value: 30.51979017644255
- type: mrr_at_5
value: 31.67143538388174
- type: nauc_map_at_1000_diff1
value: 43.61649840733464
- type: nauc_map_at_1000_max
value: 27.361709993841355
- type: nauc_map_at_1000_std
value: -1.47509416166404
- type: nauc_map_at_100_diff1
value: 43.63694784277137
- type: nauc_map_at_100_max
value: 27.3675446795805
- type: nauc_map_at_100_std
value: -1.4918015679743737
- type: nauc_map_at_10_diff1
value: 43.85263484013946
- type: nauc_map_at_10_max
value: 26.810142038619045
- type: nauc_map_at_10_std
value: -1.9884710880957612
- type: nauc_map_at_1_diff1
value: 48.66149039458694
- type: nauc_map_at_1_max
value: 25.719796249226828
- type: nauc_map_at_1_std
value: -3.291830544258096
- type: nauc_map_at_20_diff1
value: 43.70511471916722
- type: nauc_map_at_20_max
value: 27.211922285560092
- type: nauc_map_at_20_std
value: -1.621254133243609
- type: nauc_map_at_3_diff1
value: 45.678378884966854
- type: nauc_map_at_3_max
value: 26.263363796878807
- type: nauc_map_at_3_std
value: -3.067861673919005
- type: nauc_map_at_5_diff1
value: 44.28820868486158
- type: nauc_map_at_5_max
value: 27.02028586800064
- type: nauc_map_at_5_std
value: -2.8993536712942554
- type: nauc_mrr_at_1000_diff1
value: 41.91452307309703
- type: nauc_mrr_at_1000_max
value: 28.25542784321284
- type: nauc_mrr_at_1000_std
value: -1.2881473492995474
- type: nauc_mrr_at_100_diff1
value: 41.887361041816355
- type: nauc_mrr_at_100_max
value: 28.242674898536045
- type: nauc_mrr_at_100_std
value: -1.2962789057617752
- type: nauc_mrr_at_10_diff1
value: 41.839392429152184
- type: nauc_mrr_at_10_max
value: 28.18109937160502
- type: nauc_mrr_at_10_std
value: -1.760338307129395
- type: nauc_mrr_at_1_diff1
value: 46.97337896088234
- type: nauc_mrr_at_1_max
value: 28.47299575870196
- type: nauc_mrr_at_1_std
value: -2.699423724792112
- type: nauc_mrr_at_20_diff1
value: 41.87609128070427
- type: nauc_mrr_at_20_max
value: 28.275298954521837
- type: nauc_mrr_at_20_std
value: -1.3019240483529069
- type: nauc_mrr_at_3_diff1
value: 43.7337496151517
- type: nauc_mrr_at_3_max
value: 27.798267478018285
- type: nauc_mrr_at_3_std
value: -2.840593072947404
- type: nauc_mrr_at_5_diff1
value: 42.334483231228894
- type: nauc_mrr_at_5_max
value: 28.312298246235912
- type: nauc_mrr_at_5_std
value: -2.4627148837425574
- type: nauc_ndcg_at_1000_diff1
value: 41.15727539315947
- type: nauc_ndcg_at_1000_max
value: 28.221291832726013
- type: nauc_ndcg_at_1000_std
value: 2.0023108110987686
- type: nauc_ndcg_at_100_diff1
value: 40.696711368737986
- type: nauc_ndcg_at_100_max
value: 28.3380433133816
- type: nauc_ndcg_at_100_std
value: 1.6747741379499974
- type: nauc_ndcg_at_10_diff1
value: 40.68084799209197
- type: nauc_ndcg_at_10_max
value: 27.001668531808047
- type: nauc_ndcg_at_10_std
value: -0.6698055635076909
- type: nauc_ndcg_at_1_diff1
value: 46.97337896088234
- type: nauc_ndcg_at_1_max
value: 28.47299575870196
- type: nauc_ndcg_at_1_std
value: -2.699423724792112
- type: nauc_ndcg_at_20_diff1
value: 40.66080469225681
- type: nauc_ndcg_at_20_max
value: 27.65886977082646
- type: nauc_ndcg_at_20_std
value: 0.7450066458769301
- type: nauc_ndcg_at_3_diff1
value: 42.76104820392522
- type: nauc_ndcg_at_3_max
value: 26.519613853147632
- type: nauc_ndcg_at_3_std
value: -2.4350130293906034
- type: nauc_ndcg_at_5_diff1
value: 41.019172353488194
- type: nauc_ndcg_at_5_max
value: 27.496046368143357
- type: nauc_ndcg_at_5_std
value: -2.2882580326645177
- type: nauc_precision_at_1000_diff1
value: -14.261675661323125
- type: nauc_precision_at_1000_max
value: -1.183805005826827
- type: nauc_precision_at_1000_std
value: 3.344837871953594
- type: nauc_precision_at_100_diff1
value: 2.705968352361474
- type: nauc_precision_at_100_max
value: 15.123914801051598
- type: nauc_precision_at_100_std
value: 6.622282531987529
- type: nauc_precision_at_10_diff1
value: 21.143497652137974
- type: nauc_precision_at_10_max
value: 22.754667045964673
- type: nauc_precision_at_10_std
value: 2.56769270957959
- type: nauc_precision_at_1_diff1
value: 46.97337896088234
- type: nauc_precision_at_1_max
value: 28.47299575870196
- type: nauc_precision_at_1_std
value: -2.699423724792112
- type: nauc_precision_at_20_diff1
value: 15.750482341955857
- type: nauc_precision_at_20_max
value: 22.860380841938827
- type: nauc_precision_at_20_std
value: 4.22745838192582
- type: nauc_precision_at_3_diff1
value: 35.61809209460161
- type: nauc_precision_at_3_max
value: 27.0006337531976
- type: nauc_precision_at_3_std
value: -1.4556398881692423
- type: nauc_precision_at_5_diff1
value: 28.851808861899496
- type: nauc_precision_at_5_max
value: 27.469054608601784
- type: nauc_precision_at_5_std
value: -1.1421142808937477
- type: nauc_recall_at_1000_diff1
value: 33.27567106545891
- type: nauc_recall_at_1000_max
value: 30.098997951989325
- type: nauc_recall_at_1000_std
value: 37.339251250157766
- type: nauc_recall_at_100_diff1
value: 29.072377336992822
- type: nauc_recall_at_100_max
value: 28.48476566182903
- type: nauc_recall_at_100_std
value: 14.360417936748082
- type: nauc_recall_at_10_diff1
value: 32.83564819819592
- type: nauc_recall_at_10_max
value: 24.465508171060677
- type: nauc_recall_at_10_std
value: 3.332253149508536
- type: nauc_recall_at_1_diff1
value: 48.66149039458694
- type: nauc_recall_at_1_max
value: 25.719796249226828
- type: nauc_recall_at_1_std
value: -3.291830544258096
- type: nauc_recall_at_20_diff1
value: 31.185350107155045
- type: nauc_recall_at_20_max
value: 25.812923152751406
- type: nauc_recall_at_20_std
value: 8.353054109145367
- type: nauc_recall_at_3_diff1
value: 40.27297484569938
- type: nauc_recall_at_3_max
value: 23.81327189620511
- type: nauc_recall_at_3_std
value: -2.526830052534271
- type: nauc_recall_at_5_diff1
value: 34.64896359382995
- type: nauc_recall_at_5_max
value: 25.750218989139317
- type: nauc_recall_at_5_std
value: -1.3789317138918638
- type: ndcg_at_1
value: 25.894000000000002
- type: ndcg_at_10
value: 31.835
- type: ndcg_at_100
value: 37.325
- type: ndcg_at_1000
value: 40.586
- type: ndcg_at_20
value: 33.714
- type: ndcg_at_3
value: 28.143
- type: ndcg_at_5
value: 29.648999999999997
- type: precision_at_1
value: 25.894000000000002
- type: precision_at_10
value: 6.194999999999999
- type: precision_at_100
value: 1.126
- type: precision_at_1000
value: 0.173
- type: precision_at_20
value: 3.7199999999999998
- type: precision_at_3
value: 13.543
- type: precision_at_5
value: 9.757
- type: recall_at_1
value: 19.939
- type: recall_at_10
value: 40.537
- type: recall_at_100
value: 64.717
- type: recall_at_1000
value: 87.01299999999999
- type: recall_at_20
value: 47.677
- type: recall_at_3
value: 29.301
- type: recall_at_5
value: 33.918
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 25.734
- type: map_at_1
value: 16.601
- type: map_at_10
value: 22.07
- type: map_at_100
value: 22.958000000000002
- type: map_at_1000
value: 23.074
- type: map_at_20
value: 22.52
- type: map_at_3
value: 20.137
- type: map_at_5
value: 21.315
- type: mrr_at_1
value: 20.382165605095544
- type: mrr_at_10
value: 25.95447881912849
- type: mrr_at_100
value: 26.72268332839149
- type: mrr_at_1000
value: 26.79228081014276
- type: mrr_at_20
value: 26.372942687112676
- type: mrr_at_3
value: 24.097664543524406
- type: mrr_at_5
value: 25.269639065817373
- type: nauc_map_at_1000_diff1
value: 39.97979443324452
- type: nauc_map_at_1000_max
value: 13.65503993855689
- type: nauc_map_at_1000_std
value: -2.0265680574493286
- type: nauc_map_at_100_diff1
value: 40.04134376146643
- type: nauc_map_at_100_max
value: 13.602473622919186
- type: nauc_map_at_100_std
value: -2.1531627932652073
- type: nauc_map_at_10_diff1
value: 40.321538712092966
- type: nauc_map_at_10_max
value: 13.5001803982381
- type: nauc_map_at_10_std
value: -2.628320244096416
- type: nauc_map_at_1_diff1
value: 47.528556920568896
- type: nauc_map_at_1_max
value: 15.848152314768068
- type: nauc_map_at_1_std
value: -3.8515029742454763
- type: nauc_map_at_20_diff1
value: 40.22452252482904
- type: nauc_map_at_20_max
value: 13.501820277821633
- type: nauc_map_at_20_std
value: -2.4849480670127835
- type: nauc_map_at_3_diff1
value: 41.68152420395297
- type: nauc_map_at_3_max
value: 13.993359536648425
- type: nauc_map_at_3_std
value: -4.120472655476033
- type: nauc_map_at_5_diff1
value: 40.72541498326932
- type: nauc_map_at_5_max
value: 13.706855573979945
- type: nauc_map_at_5_std
value: -3.168857069165899
- type: nauc_mrr_at_1000_diff1
value: 37.9361528126572
- type: nauc_mrr_at_1000_max
value: 14.435169065764649
- type: nauc_mrr_at_1000_std
value: -0.3672502634006242
- type: nauc_mrr_at_100_diff1
value: 37.94986436229442
- type: nauc_mrr_at_100_max
value: 14.435994989813192
- type: nauc_mrr_at_100_std
value: -0.37576385382293837
- type: nauc_mrr_at_10_diff1
value: 38.11900316449423
- type: nauc_mrr_at_10_max
value: 14.472293540608746
- type: nauc_mrr_at_10_std
value: -0.43716209085613345
- type: nauc_mrr_at_1_diff1
value: 44.21976115137286
- type: nauc_mrr_at_1_max
value: 17.82290497090946
- type: nauc_mrr_at_1_std
value: -1.547820761457578
- type: nauc_mrr_at_20_diff1
value: 38.024147471792524
- type: nauc_mrr_at_20_max
value: 14.385378851779368
- type: nauc_mrr_at_20_std
value: -0.47797312999005215
- type: nauc_mrr_at_3_diff1
value: 39.15186528374059
- type: nauc_mrr_at_3_max
value: 15.21927102759239
- type: nauc_mrr_at_3_std
value: -1.5215890424003806
- type: nauc_mrr_at_5_diff1
value: 38.45626599850357
- type: nauc_mrr_at_5_max
value: 14.640408888284732
- type: nauc_mrr_at_5_std
value: -0.7311075454359176
- type: nauc_ndcg_at_1000_diff1
value: 36.09833573033763
- type: nauc_ndcg_at_1000_max
value: 13.245365815282575
- type: nauc_ndcg_at_1000_std
value: 1.5761746506032988
- type: nauc_ndcg_at_100_diff1
value: 36.904025539005644
- type: nauc_ndcg_at_100_max
value: 12.957957928970645
- type: nauc_ndcg_at_100_std
value: 0.4532239536005292
- type: nauc_ndcg_at_10_diff1
value: 37.32497182133629
- type: nauc_ndcg_at_10_max
value: 12.490853969491074
- type: nauc_ndcg_at_10_std
value: -0.7416415504597471
- type: nauc_ndcg_at_1_diff1
value: 44.21976115137286
- type: nauc_ndcg_at_1_max
value: 17.82290497090946
- type: nauc_ndcg_at_1_std
value: -1.547820761457578
- type: nauc_ndcg_at_20_diff1
value: 37.28170904668032
- type: nauc_ndcg_at_20_max
value: 12.268080858587759
- type: nauc_ndcg_at_20_std
value: -0.7360183931126623
- type: nauc_ndcg_at_3_diff1
value: 39.02888999235542
- type: nauc_ndcg_at_3_max
value: 13.901334459489329
- type: nauc_ndcg_at_3_std
value: -2.7172751935367647
- type: nauc_ndcg_at_5_diff1
value: 38.02752207740974
- type: nauc_ndcg_at_5_max
value: 13.02646174038431
- type: nauc_ndcg_at_5_std
value: -1.609904028585218
- type: nauc_precision_at_1000_diff1
value: -6.66757757004073
- type: nauc_precision_at_1000_max
value: 9.0023204523236
- type: nauc_precision_at_1000_std
value: 23.5060357363243
- type: nauc_precision_at_100_diff1
value: 6.113195112414238
- type: nauc_precision_at_100_max
value: 11.685619926894306
- type: nauc_precision_at_100_std
value: 19.46517809799074
- type: nauc_precision_at_10_diff1
value: 20.39466712905433
- type: nauc_precision_at_10_max
value: 11.42898255449916
- type: nauc_precision_at_10_std
value: 9.716462445452729
- type: nauc_precision_at_1_diff1
value: 44.21976115137286
- type: nauc_precision_at_1_max
value: 17.82290497090946
- type: nauc_precision_at_1_std
value: -1.547820761457578
- type: nauc_precision_at_20_diff1
value: 16.658730057271427
- type: nauc_precision_at_20_max
value: 11.1652114440581
- type: nauc_precision_at_20_std
value: 11.300027272107469
- type: nauc_precision_at_3_diff1
value: 30.28030907617402
- type: nauc_precision_at_3_max
value: 13.794055418422083
- type: nauc_precision_at_3_std
value: 0.6048823642224063
- type: nauc_precision_at_5_diff1
value: 25.663334758638058
- type: nauc_precision_at_5_max
value: 12.249908938864056
- type: nauc_precision_at_5_std
value: 5.0045410071189425
- type: nauc_recall_at_1000_diff1
value: 21.220572448408245
- type: nauc_recall_at_1000_max
value: 9.691420267810058
- type: nauc_recall_at_1000_std
value: 12.85759827330056
- type: nauc_recall_at_100_diff1
value: 28.21527141094479
- type: nauc_recall_at_100_max
value: 9.83831880254868
- type: nauc_recall_at_100_std
value: 5.435149253402134
- type: nauc_recall_at_10_diff1
value: 30.716014201487262
- type: nauc_recall_at_10_max
value: 8.051593782800182
- type: nauc_recall_at_10_std
value: 0.4471610378184442
- type: nauc_recall_at_1_diff1
value: 47.528556920568896
- type: nauc_recall_at_1_max
value: 15.848152314768068
- type: nauc_recall_at_1_std
value: -3.8515029742454763
- type: nauc_recall_at_20_diff1
value: 29.800603042147905
- type: nauc_recall_at_20_max
value: 7.042808403898776
- type: nauc_recall_at_20_std
value: 0.8179034283502986
- type: nauc_recall_at_3_diff1
value: 36.05311584515151
- type: nauc_recall_at_3_max
value: 11.03138015792514
- type: nauc_recall_at_3_std
value: -4.298332543889119
- type: nauc_recall_at_5_diff1
value: 33.34542113435848
- type: nauc_recall_at_5_max
value: 9.391429367517976
- type: nauc_recall_at_5_std
value: -1.5174868347878459
- type: ndcg_at_1
value: 20.382
- type: ndcg_at_10
value: 25.734
- type: ndcg_at_100
value: 29.952
- type: ndcg_at_1000
value: 32.618
- type: ndcg_at_20
value: 27.181
- type: ndcg_at_3
value: 22.445999999999998
- type: ndcg_at_5
value: 24.162
- type: precision_at_1
value: 20.382
- type: precision_at_10
value: 4.662
- type: precision_at_100
value: 0.8580000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_20
value: 2.828
- type: precision_at_3
value: 10.446
- type: precision_at_5
value: 7.682
- type: recall_at_1
value: 16.601
- type: recall_at_10
value: 32.882
- type: recall_at_100
value: 51.273
- type: recall_at_1000
value: 69.33200000000001
- type: recall_at_20
value: 38.22
- type: recall_at_3
value: 23.54
- type: recall_at_5
value: 28.054000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 39.235
- type: map_at_1
value: 25.386999999999997
- type: map_at_10
value: 34.183
- type: map_at_100
value: 35.198
- type: map_at_1000
value: 35.292
- type: map_at_20
value: 34.756
- type: map_at_3
value: 31.466
- type: map_at_5
value: 33.037
- type: mrr_at_1
value: 29.404388714733543
- type: mrr_at_10
value: 37.51880877742944
- type: mrr_at_100
value: 38.30457109532953
- type: mrr_at_1000
value: 38.3645245292866
- type: mrr_at_20
value: 37.94776237222878
- type: mrr_at_3
value: 35.15151515151513
- type: mrr_at_5
value: 36.530825496342715
- type: nauc_map_at_1000_diff1
value: 41.249973220934464
- type: nauc_map_at_1000_max
value: 23.416302755877073
- type: nauc_map_at_1000_std
value: -10.207899212437999
- type: nauc_map_at_100_diff1
value: 41.24390045906369
- type: nauc_map_at_100_max
value: 23.393682611799267
- type: nauc_map_at_100_std
value: -10.254556576082482
- type: nauc_map_at_10_diff1
value: 41.382354597936995
- type: nauc_map_at_10_max
value: 23.176782265492363
- type: nauc_map_at_10_std
value: -10.849718292221906
- type: nauc_map_at_1_diff1
value: 45.39686265513208
- type: nauc_map_at_1_max
value: 19.620871905273706
- type: nauc_map_at_1_std
value: -12.904987428165654
- type: nauc_map_at_20_diff1
value: 41.27244082919643
- type: nauc_map_at_20_max
value: 23.302684773349597
- type: nauc_map_at_20_std
value: -10.441842806985154
- type: nauc_map_at_3_diff1
value: 41.8919220244127
- type: nauc_map_at_3_max
value: 22.254220793423723
- type: nauc_map_at_3_std
value: -12.130298439753705
- type: nauc_map_at_5_diff1
value: 41.58025783631085
- type: nauc_map_at_5_max
value: 22.90826213564573
- type: nauc_map_at_5_std
value: -11.165811549758352
- type: nauc_mrr_at_1000_diff1
value: 40.53152598499822
- type: nauc_mrr_at_1000_max
value: 25.11227665851315
- type: nauc_mrr_at_1000_std
value: -8.08741271282522
- type: nauc_mrr_at_100_diff1
value: 40.51963005358264
- type: nauc_mrr_at_100_max
value: 25.120293035347625
- type: nauc_mrr_at_100_std
value: -8.08477757772673
- type: nauc_mrr_at_10_diff1
value: 40.630254919734845
- type: nauc_mrr_at_10_max
value: 25.192263018985
- type: nauc_mrr_at_10_std
value: -8.343786686430308
- type: nauc_mrr_at_1_diff1
value: 45.24802769641752
- type: nauc_mrr_at_1_max
value: 22.81400229887994
- type: nauc_mrr_at_1_std
value: -11.030374885452746
- type: nauc_mrr_at_20_diff1
value: 40.527874579465404
- type: nauc_mrr_at_20_max
value: 25.09785309228408
- type: nauc_mrr_at_20_std
value: -8.178961300984005
- type: nauc_mrr_at_3_diff1
value: 40.9982110047705
- type: nauc_mrr_at_3_max
value: 24.89415486978485
- type: nauc_mrr_at_3_std
value: -9.326777261347539
- type: nauc_mrr_at_5_diff1
value: 40.80630420274428
- type: nauc_mrr_at_5_max
value: 25.27575084878062
- type: nauc_mrr_at_5_std
value: -8.546736722404525
- type: nauc_ndcg_at_1000_diff1
value: 39.53378645935715
- type: nauc_ndcg_at_1000_max
value: 25.526492849521226
- type: nauc_ndcg_at_1000_std
value: -6.007063152931765
- type: nauc_ndcg_at_100_diff1
value: 39.0880907026097
- type: nauc_ndcg_at_100_max
value: 25.27434977919565
- type: nauc_ndcg_at_100_std
value: -6.494059729717049
- type: nauc_ndcg_at_10_diff1
value: 39.75643189392527
- type: nauc_ndcg_at_10_max
value: 24.79335502116443
- type: nauc_ndcg_at_10_std
value: -8.786781322519788
- type: nauc_ndcg_at_1_diff1
value: 45.24802769641752
- type: nauc_ndcg_at_1_max
value: 22.81400229887994
- type: nauc_ndcg_at_1_std
value: -11.030374885452746
- type: nauc_ndcg_at_20_diff1
value: 39.38115636990762
- type: nauc_ndcg_at_20_max
value: 24.830948061340973
- type: nauc_ndcg_at_20_std
value: -7.74514857483731
- type: nauc_ndcg_at_3_diff1
value: 40.597424968913295
- type: nauc_ndcg_at_3_max
value: 23.83761797284813
- type: nauc_ndcg_at_3_std
value: -10.826014984199753
- type: nauc_ndcg_at_5_diff1
value: 40.160243884240955
- type: nauc_ndcg_at_5_max
value: 24.641005184802403
- type: nauc_ndcg_at_5_std
value: -9.394573143721122
- type: nauc_precision_at_1000_diff1
value: -0.26775483855404
- type: nauc_precision_at_1000_max
value: 23.052779599626216
- type: nauc_precision_at_1000_std
value: 24.978867586645737
- type: nauc_precision_at_100_diff1
value: 9.73599417323489
- type: nauc_precision_at_100_max
value: 26.664612833573067
- type: nauc_precision_at_100_std
value: 15.747547424892522
- type: nauc_precision_at_10_diff1
value: 25.384143998683495
- type: nauc_precision_at_10_max
value: 28.77515164969203
- type: nauc_precision_at_10_std
value: 1.334799782027906
- type: nauc_precision_at_1_diff1
value: 45.24802769641752
- type: nauc_precision_at_1_max
value: 22.81400229887994
- type: nauc_precision_at_1_std
value: -11.030374885452746
- type: nauc_precision_at_20_diff1
value: 20.21252517032333
- type: nauc_precision_at_20_max
value: 28.092242647209847
- type: nauc_precision_at_20_std
value: 7.13292725544981
- type: nauc_precision_at_3_diff1
value: 33.31087126292084
- type: nauc_precision_at_3_max
value: 28.144729235595268
- type: nauc_precision_at_3_std
value: -6.680273865904818
- type: nauc_precision_at_5_diff1
value: 29.65876394876068
- type: nauc_precision_at_5_max
value: 29.35126830830009
- type: nauc_precision_at_5_std
value: -1.6373943088766274
- type: nauc_recall_at_1000_diff1
value: 28.93648565815677
- type: nauc_recall_at_1000_max
value: 35.83681303333163
- type: nauc_recall_at_1000_std
value: 33.065249002817446
- type: nauc_recall_at_100_diff1
value: 27.743019102171594
- type: nauc_recall_at_100_max
value: 28.027951033595023
- type: nauc_recall_at_100_std
value: 9.499502949546343
- type: nauc_recall_at_10_diff1
value: 33.975592980890205
- type: nauc_recall_at_10_max
value: 25.654266106207007
- type: nauc_recall_at_10_std
value: -4.889087003341999
- type: nauc_recall_at_1_diff1
value: 45.39686265513208
- type: nauc_recall_at_1_max
value: 19.620871905273706
- type: nauc_recall_at_1_std
value: -12.904987428165654
- type: nauc_recall_at_20_diff1
value: 32.428638046562156
- type: nauc_recall_at_20_max
value: 25.811049662670854
- type: nauc_recall_at_20_std
value: -1.084167664066214
- type: nauc_recall_at_3_diff1
value: 36.80239523147669
- type: nauc_recall_at_3_max
value: 23.70115293826517
- type: nauc_recall_at_3_std
value: -10.179865917816631
- type: nauc_recall_at_5_diff1
value: 35.481273082880385
- type: nauc_recall_at_5_max
value: 25.22699895557444
- type: nauc_recall_at_5_std
value: -6.928154160954265
- type: ndcg_at_1
value: 29.404000000000003
- type: ndcg_at_10
value: 39.235
- type: ndcg_at_100
value: 44.072
- type: ndcg_at_1000
value: 46.272999999999996
- type: ndcg_at_20
value: 40.983000000000004
- type: ndcg_at_3
value: 34.292
- type: ndcg_at_5
value: 36.735
- type: precision_at_1
value: 29.404000000000003
- type: precision_at_10
value: 6.539000000000001
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.125
- type: precision_at_20
value: 3.752
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 10.984
- type: recall_at_1
value: 25.386999999999997
- type: recall_at_10
value: 51.256
- type: recall_at_100
value: 73.53699999999999
- type: recall_at_1000
value: 89.522
- type: recall_at_20
value: 57.687
- type: recall_at_3
value: 37.830999999999996
- type: recall_at_5
value: 43.811
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 19.197
- type: map_at_1
value: 10.832
- type: map_at_10
value: 16.154
- type: map_at_100
value: 16.863
- type: map_at_1000
value: 16.979
- type: map_at_20
value: 16.494
- type: map_at_3
value: 14.654
- type: map_at_5
value: 15.634
- type: mrr_at_1
value: 11.751412429378531
- type: mrr_at_10
value: 17.286476549188407
- type: mrr_at_100
value: 18.019080515365157
- type: mrr_at_1000
value: 18.122220740371624
- type: mrr_at_20
value: 17.643986643881693
- type: mrr_at_3
value: 15.70621468926553
- type: mrr_at_5
value: 16.774011299435024
- type: nauc_map_at_1000_diff1
value: 37.927063185916786
- type: nauc_map_at_1000_max
value: 14.15651072891371
- type: nauc_map_at_1000_std
value: -8.124962552251457
- type: nauc_map_at_100_diff1
value: 37.93525025821844
- type: nauc_map_at_100_max
value: 14.131523699537288
- type: nauc_map_at_100_std
value: -8.170583771371396
- type: nauc_map_at_10_diff1
value: 38.42813636094302
- type: nauc_map_at_10_max
value: 14.282120499977891
- type: nauc_map_at_10_std
value: -8.577031812934745
- type: nauc_map_at_1_diff1
value: 51.66692699481996
- type: nauc_map_at_1_max
value: 17.664646674047123
- type: nauc_map_at_1_std
value: -11.782621031162968
- type: nauc_map_at_20_diff1
value: 38.17853788871855
- type: nauc_map_at_20_max
value: 14.256213676574742
- type: nauc_map_at_20_std
value: -8.310926163301415
- type: nauc_map_at_3_diff1
value: 40.16070984262913
- type: nauc_map_at_3_max
value: 14.268693118841725
- type: nauc_map_at_3_std
value: -9.133251481752447
- type: nauc_map_at_5_diff1
value: 38.83714248320578
- type: nauc_map_at_5_max
value: 14.547528919229999
- type: nauc_map_at_5_std
value: -8.916871955060776
- type: nauc_mrr_at_1000_diff1
value: 36.5899689047331
- type: nauc_mrr_at_1000_max
value: 15.113884206534985
- type: nauc_mrr_at_1000_std
value: -7.170934224974719
- type: nauc_mrr_at_100_diff1
value: 36.58290352969189
- type: nauc_mrr_at_100_max
value: 15.10461015425463
- type: nauc_mrr_at_100_std
value: -7.193153133255972
- type: nauc_mrr_at_10_diff1
value: 36.886787941126755
- type: nauc_mrr_at_10_max
value: 15.127743773603711
- type: nauc_mrr_at_10_std
value: -7.450354111586159
- type: nauc_mrr_at_1_diff1
value: 50.4303551964735
- type: nauc_mrr_at_1_max
value: 18.974353633454818
- type: nauc_mrr_at_1_std
value: -10.667048661688531
- type: nauc_mrr_at_20_diff1
value: 36.748056497939466
- type: nauc_mrr_at_20_max
value: 15.240859680475241
- type: nauc_mrr_at_20_std
value: -7.288016407850428
- type: nauc_mrr_at_3_diff1
value: 38.37428302171742
- type: nauc_mrr_at_3_max
value: 14.8093219575286
- type: nauc_mrr_at_3_std
value: -7.809230035161326
- type: nauc_mrr_at_5_diff1
value: 37.2144623683964
- type: nauc_mrr_at_5_max
value: 15.28601324524152
- type: nauc_mrr_at_5_std
value: -7.7340060832485
- type: nauc_ndcg_at_1000_diff1
value: 32.12453348510246
- type: nauc_ndcg_at_1000_max
value: 13.157455004954915
- type: nauc_ndcg_at_1000_std
value: -4.92622356811411
- type: nauc_ndcg_at_100_diff1
value: 32.06154877919635
- type: nauc_ndcg_at_100_max
value: 12.373862596941047
- type: nauc_ndcg_at_100_std
value: -5.679273924705311
- type: nauc_ndcg_at_10_diff1
value: 34.0105889334877
- type: nauc_ndcg_at_10_max
value: 13.45850179368671
- type: nauc_ndcg_at_10_std
value: -7.129474197823981
- type: nauc_ndcg_at_1_diff1
value: 50.4303551964735
- type: nauc_ndcg_at_1_max
value: 18.974353633454818
- type: nauc_ndcg_at_1_std
value: -10.667048661688531
- type: nauc_ndcg_at_20_diff1
value: 33.17001669466592
- type: nauc_ndcg_at_20_max
value: 13.32565385671001
- type: nauc_ndcg_at_20_std
value: -6.284897809311489
- type: nauc_ndcg_at_3_diff1
value: 36.583009335894786
- type: nauc_ndcg_at_3_max
value: 13.3100798018976
- type: nauc_ndcg_at_3_std
value: -8.166653842277874
- type: nauc_ndcg_at_5_diff1
value: 34.663883470713714
- type: nauc_ndcg_at_5_max
value: 13.925348847790179
- type: nauc_ndcg_at_5_std
value: -7.8134139319246705
- type: nauc_precision_at_1000_diff1
value: 3.267820129824429
- type: nauc_precision_at_1000_max
value: 13.475739290072998
- type: nauc_precision_at_1000_std
value: 9.817456700342868
- type: nauc_precision_at_100_diff1
value: 14.543473928222502
- type: nauc_precision_at_100_max
value: 9.536842145225432
- type: nauc_precision_at_100_std
value: 2.367980716410962
- type: nauc_precision_at_10_diff1
value: 22.83690357863953
- type: nauc_precision_at_10_max
value: 12.377338528340081
- type: nauc_precision_at_10_std
value: -2.7413618512966442
- type: nauc_precision_at_1_diff1
value: 50.4303551964735
- type: nauc_precision_at_1_max
value: 18.974353633454818
- type: nauc_precision_at_1_std
value: -10.667048661688531
- type: nauc_precision_at_20_diff1
value: 20.379974384537427
- type: nauc_precision_at_20_max
value: 12.277432490519853
- type: nauc_precision_at_20_std
value: -0.023357415290595228
- type: nauc_precision_at_3_diff1
value: 28.00128059605776
- type: nauc_precision_at_3_max
value: 12.115949162806704
- type: nauc_precision_at_3_std
value: -5.111345494119332
- type: nauc_precision_at_5_diff1
value: 23.931333166517064
- type: nauc_precision_at_5_max
value: 13.460490076263444
- type: nauc_precision_at_5_std
value: -4.566369591299022
- type: nauc_recall_at_1000_diff1
value: 13.901980638817474
- type: nauc_recall_at_1000_max
value: 8.169301488452522
- type: nauc_recall_at_1000_std
value: 6.977530327014011
- type: nauc_recall_at_100_diff1
value: 18.54699849728289
- type: nauc_recall_at_100_max
value: 5.40051681338299
- type: nauc_recall_at_100_std
value: -0.2998165893044503
- type: nauc_recall_at_10_diff1
value: 25.158691029447162
- type: nauc_recall_at_10_max
value: 10.698096715728344
- type: nauc_recall_at_10_std
value: -4.90677955177619
- type: nauc_recall_at_1_diff1
value: 51.66692699481996
- type: nauc_recall_at_1_max
value: 17.664646674047123
- type: nauc_recall_at_1_std
value: -11.782621031162968
- type: nauc_recall_at_20_diff1
value: 22.315869507893193
- type: nauc_recall_at_20_max
value: 9.799239845339486
- type: nauc_recall_at_20_std
value: -2.255295176195769
- type: nauc_recall_at_3_diff1
value: 30.21846457670379
- type: nauc_recall_at_3_max
value: 10.958491456074727
- type: nauc_recall_at_3_std
value: -6.746808382770713
- type: nauc_recall_at_5_diff1
value: 26.24302256225738
- type: nauc_recall_at_5_max
value: 11.682268465161725
- type: nauc_recall_at_5_std
value: -6.292007648799524
- type: ndcg_at_1
value: 11.751000000000001
- type: ndcg_at_10
value: 19.197
- type: ndcg_at_100
value: 23.159
- type: ndcg_at_1000
value: 26.453
- type: ndcg_at_20
value: 20.448
- type: ndcg_at_3
value: 16.186
- type: ndcg_at_5
value: 17.936
- type: precision_at_1
value: 11.751000000000001
- type: precision_at_10
value: 3.1189999999999998
- type: precision_at_100
value: 0.54
- type: precision_at_1000
value: 0.086
- type: precision_at_20
value: 1.859
- type: precision_at_3
value: 7.194000000000001
- type: precision_at_5
value: 5.311
- type: recall_at_1
value: 10.832
- type: recall_at_10
value: 27.472
- type: recall_at_100
value: 46.471000000000004
- type: recall_at_1000
value: 71.91199999999999
- type: recall_at_20
value: 32.213
- type: recall_at_3
value: 19.417
- type: recall_at_5
value: 23.577
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 12.145
- type: map_at_1
value: 6.019
- type: map_at_10
value: 9.584
- type: map_at_100
value: 10.433
- type: map_at_1000
value: 10.562000000000001
- type: map_at_20
value: 10.024
- type: map_at_3
value: 8.351
- type: map_at_5
value: 9.005
- type: mrr_at_1
value: 7.213930348258707
- type: mrr_at_10
value: 11.619827450051332
- type: mrr_at_100
value: 12.469229814971346
- type: mrr_at_1000
value: 12.577286932589695
- type: mrr_at_20
value: 12.072514356821353
- type: mrr_at_3
value: 10.157545605306801
- type: mrr_at_5
value: 10.89759535655058
- type: nauc_map_at_1000_diff1
value: 18.60219400887139
- type: nauc_map_at_1000_max
value: 6.951583595979727
- type: nauc_map_at_1000_std
value: -0.36466683994108184
- type: nauc_map_at_100_diff1
value: 18.660733139389524
- type: nauc_map_at_100_max
value: 6.903072765131549
- type: nauc_map_at_100_std
value: -0.48390217802549257
- type: nauc_map_at_10_diff1
value: 18.573179595835647
- type: nauc_map_at_10_max
value: 6.992666771720911
- type: nauc_map_at_10_std
value: -0.8874423543023089
- type: nauc_map_at_1_diff1
value: 33.90106432523568
- type: nauc_map_at_1_max
value: 9.289205840089235
- type: nauc_map_at_1_std
value: 2.1852128418717705
- type: nauc_map_at_20_diff1
value: 18.334656889783485
- type: nauc_map_at_20_max
value: 6.931684308001437
- type: nauc_map_at_20_std
value: -0.7124186564380448
- type: nauc_map_at_3_diff1
value: 20.32895393313974
- type: nauc_map_at_3_max
value: 5.887419026571198
- type: nauc_map_at_3_std
value: -0.015273865884840596
- type: nauc_map_at_5_diff1
value: 19.15574225963634
- type: nauc_map_at_5_max
value: 6.175933890525402
- type: nauc_map_at_5_std
value: -1.468261999387673
- type: nauc_mrr_at_1000_diff1
value: 18.0560339880594
- type: nauc_mrr_at_1000_max
value: 8.653214727915024
- type: nauc_mrr_at_1000_std
value: 1.6650523107666824
- type: nauc_mrr_at_100_diff1
value: 18.067266124955946
- type: nauc_mrr_at_100_max
value: 8.645444544074266
- type: nauc_mrr_at_100_std
value: 1.605397143432772
- type: nauc_mrr_at_10_diff1
value: 18.227604303918422
- type: nauc_mrr_at_10_max
value: 8.980990643614946
- type: nauc_mrr_at_10_std
value: 1.625956129526598
- type: nauc_mrr_at_1_diff1
value: 33.145174271418576
- type: nauc_mrr_at_1_max
value: 10.674348159869123
- type: nauc_mrr_at_1_std
value: 2.5718912675260843
- type: nauc_mrr_at_20_diff1
value: 17.85361170315467
- type: nauc_mrr_at_20_max
value: 8.689966423383293
- type: nauc_mrr_at_20_std
value: 1.4845343622374683
- type: nauc_mrr_at_3_diff1
value: 19.72873972100882
- type: nauc_mrr_at_3_max
value: 7.818757201820606
- type: nauc_mrr_at_3_std
value: 2.317801166782217
- type: nauc_mrr_at_5_diff1
value: 18.70515159747826
- type: nauc_mrr_at_5_max
value: 7.8553636278171055
- type: nauc_mrr_at_5_std
value: 0.8593300223901442
- type: nauc_ndcg_at_1000_diff1
value: 14.777764985527059
- type: nauc_ndcg_at_1000_max
value: 8.001133085293265
- type: nauc_ndcg_at_1000_std
value: 2.715094827482056
- type: nauc_ndcg_at_100_diff1
value: 15.873494520058037
- type: nauc_ndcg_at_100_max
value: 7.5190091115119
- type: nauc_ndcg_at_100_std
value: 0.7430533500967327
- type: nauc_ndcg_at_10_diff1
value: 14.950829327092022
- type: nauc_ndcg_at_10_max
value: 7.999425322307154
- type: nauc_ndcg_at_10_std
value: -0.5911692617165382
- type: nauc_ndcg_at_1_diff1
value: 33.145174271418576
- type: nauc_ndcg_at_1_max
value: 10.674348159869123
- type: nauc_ndcg_at_1_std
value: 2.5718912675260843
- type: nauc_ndcg_at_20_diff1
value: 14.28695753335748
- type: nauc_ndcg_at_20_max
value: 7.460341211112809
- type: nauc_ndcg_at_20_std
value: -0.2734671370134216
- type: nauc_ndcg_at_3_diff1
value: 17.243393543205006
- type: nauc_ndcg_at_3_max
value: 6.003682896861271
- type: nauc_ndcg_at_3_std
value: 0.3923628664952013
- type: nauc_ndcg_at_5_diff1
value: 15.841455870049076
- type: nauc_ndcg_at_5_max
value: 6.163583363661528
- type: nauc_ndcg_at_5_std
value: -1.9411356710983478
- type: nauc_precision_at_1000_diff1
value: -3.399817676017686
- type: nauc_precision_at_1000_max
value: 5.575723322824422
- type: nauc_precision_at_1000_std
value: 5.63779109914318
- type: nauc_precision_at_100_diff1
value: 6.1555220193892435
- type: nauc_precision_at_100_max
value: 6.7977343501791045
- type: nauc_precision_at_100_std
value: 2.026960062764128
- type: nauc_precision_at_10_diff1
value: 5.864713737249161
- type: nauc_precision_at_10_max
value: 10.987539143688663
- type: nauc_precision_at_10_std
value: -0.12419185225065871
- type: nauc_precision_at_1_diff1
value: 33.145174271418576
- type: nauc_precision_at_1_max
value: 10.674348159869123
- type: nauc_precision_at_1_std
value: 2.5718912675260843
- type: nauc_precision_at_20_diff1
value: 4.994637980783556
- type: nauc_precision_at_20_max
value: 7.522690866727933
- type: nauc_precision_at_20_std
value: 0.027674551460471312
- type: nauc_precision_at_3_diff1
value: 8.451342681964578
- type: nauc_precision_at_3_max
value: 5.343253356927528
- type: nauc_precision_at_3_std
value: 1.6495845441147832
- type: nauc_precision_at_5_diff1
value: 6.193033041556517
- type: nauc_precision_at_5_max
value: 5.77635145338238
- type: nauc_precision_at_5_std
value: -3.421797113444559
- type: nauc_recall_at_1000_diff1
value: 7.437110169863727
- type: nauc_recall_at_1000_max
value: 9.607314782406986
- type: nauc_recall_at_1000_std
value: 13.320498460741362
- type: nauc_recall_at_100_diff1
value: 13.309966057961834
- type: nauc_recall_at_100_max
value: 7.748170239579637
- type: nauc_recall_at_100_std
value: 2.6798857378517864
- type: nauc_recall_at_10_diff1
value: 8.674278695378167
- type: nauc_recall_at_10_max
value: 8.969918415623756
- type: nauc_recall_at_10_std
value: -1.4597400700986853
- type: nauc_recall_at_1_diff1
value: 33.90106432523568
- type: nauc_recall_at_1_max
value: 9.289205840089235
- type: nauc_recall_at_1_std
value: 2.1852128418717705
- type: nauc_recall_at_20_diff1
value: 7.663555921211413
- type: nauc_recall_at_20_max
value: 7.420494129425241
- type: nauc_recall_at_20_std
value: -0.39971980929980877
- type: nauc_recall_at_3_diff1
value: 10.784631081908223
- type: nauc_recall_at_3_max
value: 3.815625872455824
- type: nauc_recall_at_3_std
value: -1.1614434404018152
- type: nauc_recall_at_5_diff1
value: 9.60638979119831
- type: nauc_recall_at_5_max
value: 5.1710882220553405
- type: nauc_recall_at_5_std
value: -4.572280393094789
- type: ndcg_at_1
value: 7.2139999999999995
- type: ndcg_at_10
value: 12.145
- type: ndcg_at_100
value: 16.672
- type: ndcg_at_1000
value: 20.342
- type: ndcg_at_20
value: 13.745
- type: ndcg_at_3
value: 9.607000000000001
- type: ndcg_at_5
value: 10.712000000000002
- type: precision_at_1
value: 7.2139999999999995
- type: precision_at_10
value: 2.338
- type: precision_at_100
value: 0.5459999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 1.6039999999999999
- type: precision_at_3
value: 4.726
- type: precision_at_5
value: 3.5319999999999996
- type: recall_at_1
value: 6.019
- type: recall_at_10
value: 18.102999999999998
- type: recall_at_100
value: 38.482
- type: recall_at_1000
value: 65.436
- type: recall_at_20
value: 23.952
- type: recall_at_3
value: 11.178
- type: recall_at_5
value: 13.877
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 26.667999999999996
- type: map_at_1
value: 16.822
- type: map_at_10
value: 22.476
- type: map_at_100
value: 23.69
- type: map_at_1000
value: 23.827
- type: map_at_20
value: 23.084
- type: map_at_3
value: 20.441000000000003
- type: map_at_5
value: 21.512
- type: mrr_at_1
value: 20.78922040423484
- type: mrr_at_10
value: 26.67445804115679
- type: mrr_at_100
value: 27.67534998291947
- type: mrr_at_1000
value: 27.752906060167692
- type: mrr_at_20
value: 27.19875968774574
- type: mrr_at_3
value: 24.4947064485082
- type: mrr_at_5
value: 25.630413859480278
- type: nauc_map_at_1000_diff1
value: 40.40492447320535
- type: nauc_map_at_1000_max
value: 28.548119831633194
- type: nauc_map_at_1000_std
value: -0.22424233207141148
- type: nauc_map_at_100_diff1
value: 40.39875847865982
- type: nauc_map_at_100_max
value: 28.500575725413096
- type: nauc_map_at_100_std
value: -0.2779979908842256
- type: nauc_map_at_10_diff1
value: 40.942304749094085
- type: nauc_map_at_10_max
value: 28.429772938475008
- type: nauc_map_at_10_std
value: -0.8049874864329988
- type: nauc_map_at_1_diff1
value: 47.17822553627135
- type: nauc_map_at_1_max
value: 31.206514215995206
- type: nauc_map_at_1_std
value: -1.8984121963184788
- type: nauc_map_at_20_diff1
value: 40.4346381000311
- type: nauc_map_at_20_max
value: 28.458128761837536
- type: nauc_map_at_20_std
value: -0.7321703207226834
- type: nauc_map_at_3_diff1
value: 42.2424427066743
- type: nauc_map_at_3_max
value: 28.16537428952111
- type: nauc_map_at_3_std
value: -2.298671243793284
- type: nauc_map_at_5_diff1
value: 41.32690925538059
- type: nauc_map_at_5_max
value: 28.53162210264393
- type: nauc_map_at_5_std
value: -1.1738320079845177
- type: nauc_mrr_at_1000_diff1
value: 37.69693278594645
- type: nauc_mrr_at_1000_max
value: 29.49690742209793
- type: nauc_mrr_at_1000_std
value: 3.1815473802020544
- type: nauc_mrr_at_100_diff1
value: 37.65946389835227
- type: nauc_mrr_at_100_max
value: 29.479438074437127
- type: nauc_mrr_at_100_std
value: 3.166552364873761
- type: nauc_mrr_at_10_diff1
value: 38.06473613801605
- type: nauc_mrr_at_10_max
value: 29.79312016758447
- type: nauc_mrr_at_10_std
value: 3.111988711521923
- type: nauc_mrr_at_1_diff1
value: 43.69553072839024
- type: nauc_mrr_at_1_max
value: 32.142344513289025
- type: nauc_mrr_at_1_std
value: 2.696048057380709
- type: nauc_mrr_at_20_diff1
value: 37.626141249327574
- type: nauc_mrr_at_20_max
value: 29.559923833552347
- type: nauc_mrr_at_20_std
value: 2.9860721770618697
- type: nauc_mrr_at_3_diff1
value: 39.324715416924974
- type: nauc_mrr_at_3_max
value: 29.651196356282618
- type: nauc_mrr_at_3_std
value: 1.9583884507428824
- type: nauc_mrr_at_5_diff1
value: 38.36691352781637
- type: nauc_mrr_at_5_max
value: 29.939763561026002
- type: nauc_mrr_at_5_std
value: 2.7317703526814214
- type: nauc_ndcg_at_1000_diff1
value: 36.523136783112406
- type: nauc_ndcg_at_1000_max
value: 28.684387654497584
- type: nauc_ndcg_at_1000_std
value: 4.732051883634089
- type: nauc_ndcg_at_100_diff1
value: 36.16154861613736
- type: nauc_ndcg_at_100_max
value: 27.921202679602143
- type: nauc_ndcg_at_100_std
value: 3.560040019944456
- type: nauc_ndcg_at_10_diff1
value: 37.774474422977896
- type: nauc_ndcg_at_10_max
value: 27.68147817987237
- type: nauc_ndcg_at_10_std
value: 0.8327502237036594
- type: nauc_ndcg_at_1_diff1
value: 43.69553072839024
- type: nauc_ndcg_at_1_max
value: 32.142344513289025
- type: nauc_ndcg_at_1_std
value: 2.696048057380709
- type: nauc_ndcg_at_20_diff1
value: 36.163233644690266
- type: nauc_ndcg_at_20_max
value: 27.4164968525345
- type: nauc_ndcg_at_20_std
value: 0.8376631121502218
- type: nauc_ndcg_at_3_diff1
value: 39.707715661307105
- type: nauc_ndcg_at_3_max
value: 28.324727845444997
- type: nauc_ndcg_at_3_std
value: -0.7238153399588456
- type: nauc_ndcg_at_5_diff1
value: 38.42323115018405
- type: nauc_ndcg_at_5_max
value: 28.520234702176587
- type: nauc_ndcg_at_5_std
value: 0.4337143091381524
- type: nauc_precision_at_1000_diff1
value: -1.7237517846851018
- type: nauc_precision_at_1000_max
value: 16.20499296488572
- type: nauc_precision_at_1000_std
value: 20.16360817424688
- type: nauc_precision_at_100_diff1
value: 7.455105305668386
- type: nauc_precision_at_100_max
value: 23.35672119353681
- type: nauc_precision_at_100_std
value: 18.66911905196039
- type: nauc_precision_at_10_diff1
value: 23.28265657395181
- type: nauc_precision_at_10_max
value: 27.533659469131948
- type: nauc_precision_at_10_std
value: 9.661356716727099
- type: nauc_precision_at_1_diff1
value: 43.69553072839024
- type: nauc_precision_at_1_max
value: 32.142344513289025
- type: nauc_precision_at_1_std
value: 2.696048057380709
- type: nauc_precision_at_20_diff1
value: 15.588844976640317
- type: nauc_precision_at_20_max
value: 24.89373446940838
- type: nauc_precision_at_20_std
value: 9.462736793529547
- type: nauc_precision_at_3_diff1
value: 31.24543977571387
- type: nauc_precision_at_3_max
value: 27.88457380895888
- type: nauc_precision_at_3_std
value: 3.0400582769598334
- type: nauc_precision_at_5_diff1
value: 27.621476771588156
- type: nauc_precision_at_5_max
value: 29.344696084898647
- type: nauc_precision_at_5_std
value: 6.279675749763937
- type: nauc_recall_at_1000_diff1
value: 20.19996493542523
- type: nauc_recall_at_1000_max
value: 24.65244498292903
- type: nauc_recall_at_1000_std
value: 35.312310075738125
- type: nauc_recall_at_100_diff1
value: 22.904431187357847
- type: nauc_recall_at_100_max
value: 21.00955732817796
- type: nauc_recall_at_100_std
value: 13.938151070174573
- type: nauc_recall_at_10_diff1
value: 30.03923096618402
- type: nauc_recall_at_10_max
value: 22.353534397229048
- type: nauc_recall_at_10_std
value: 1.2207088824681231
- type: nauc_recall_at_1_diff1
value: 47.17822553627135
- type: nauc_recall_at_1_max
value: 31.206514215995206
- type: nauc_recall_at_1_std
value: -1.8984121963184788
- type: nauc_recall_at_20_diff1
value: 24.682826207248283
- type: nauc_recall_at_20_max
value: 20.777119838220408
- type: nauc_recall_at_20_std
value: 1.2286788398315465
- type: nauc_recall_at_3_diff1
value: 35.715604782377035
- type: nauc_recall_at_3_max
value: 23.7633639937056
- type: nauc_recall_at_3_std
value: -2.868937897653619
- type: nauc_recall_at_5_diff1
value: 32.21252827575707
- type: nauc_recall_at_5_max
value: 24.799142864683375
- type: nauc_recall_at_5_std
value: 0.36296684299374204
- type: ndcg_at_1
value: 20.788999999999998
- type: ndcg_at_10
value: 26.667999999999996
- type: ndcg_at_100
value: 32.565
- type: ndcg_at_1000
value: 35.634
- type: ndcg_at_20
value: 28.642
- type: ndcg_at_3
value: 22.942
- type: ndcg_at_5
value: 24.514
- type: precision_at_1
value: 20.788999999999998
- type: precision_at_10
value: 4.947
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 3.104
- type: precision_at_3
value: 10.748000000000001
- type: precision_at_5
value: 7.68
- type: recall_at_1
value: 16.822
- type: recall_at_10
value: 35.237
- type: recall_at_100
value: 61.219
- type: recall_at_1000
value: 82.499
- type: recall_at_20
value: 42.230000000000004
- type: recall_at_3
value: 24.524
- type: recall_at_5
value: 28.787000000000003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 21.66
- type: map_at_1
value: 12.416
- type: map_at_10
value: 17.684
- type: map_at_100
value: 18.851000000000003
- type: map_at_1000
value: 18.991
- type: map_at_20
value: 18.360000000000003
- type: map_at_3
value: 15.770999999999999
- type: map_at_5
value: 16.606
- type: mrr_at_1
value: 15.068493150684931
- type: mrr_at_10
value: 21.28823294919185
- type: mrr_at_100
value: 22.306240026063588
- type: mrr_at_1000
value: 22.395578374917164
- type: mrr_at_20
value: 21.90701850599165
- type: mrr_at_3
value: 19.273211567732123
- type: mrr_at_5
value: 20.397640791476412
- type: nauc_map_at_1000_diff1
value: 32.04680475392268
- type: nauc_map_at_1000_max
value: 20.9527363509733
- type: nauc_map_at_1000_std
value: 1.9775389393996066
- type: nauc_map_at_100_diff1
value: 32.05659071752874
- type: nauc_map_at_100_max
value: 20.937669829415213
- type: nauc_map_at_100_std
value: 1.8872130027911487
- type: nauc_map_at_10_diff1
value: 32.40493239661906
- type: nauc_map_at_10_max
value: 20.24841030282171
- type: nauc_map_at_10_std
value: 0.8873591420958411
- type: nauc_map_at_1_diff1
value: 39.50866679123135
- type: nauc_map_at_1_max
value: 21.067083493139833
- type: nauc_map_at_1_std
value: -1.255629309903365
- type: nauc_map_at_20_diff1
value: 32.06523680001786
- type: nauc_map_at_20_max
value: 20.482809699946856
- type: nauc_map_at_20_std
value: 1.2900775457613989
- type: nauc_map_at_3_diff1
value: 33.51328659054749
- type: nauc_map_at_3_max
value: 19.351150884357097
- type: nauc_map_at_3_std
value: -0.9449293271546024
- type: nauc_map_at_5_diff1
value: 32.672807388132
- type: nauc_map_at_5_max
value: 19.888696407961916
- type: nauc_map_at_5_std
value: -0.21370229639305732
- type: nauc_mrr_at_1000_diff1
value: 29.4702965330427
- type: nauc_mrr_at_1000_max
value: 21.5485190959632
- type: nauc_mrr_at_1000_std
value: 2.9474086643706716
- type: nauc_mrr_at_100_diff1
value: 29.444301031842237
- type: nauc_mrr_at_100_max
value: 21.545652672940818
- type: nauc_mrr_at_100_std
value: 2.930083417192537
- type: nauc_mrr_at_10_diff1
value: 29.839809988865028
- type: nauc_mrr_at_10_max
value: 21.285084047773285
- type: nauc_mrr_at_10_std
value: 2.3023735099948794
- type: nauc_mrr_at_1_diff1
value: 38.253685943964285
- type: nauc_mrr_at_1_max
value: 23.506493457282993
- type: nauc_mrr_at_1_std
value: 0.36623457899262024
- type: nauc_mrr_at_20_diff1
value: 29.359787332306013
- type: nauc_mrr_at_20_max
value: 21.246732134190733
- type: nauc_mrr_at_20_std
value: 2.6115784611487087
- type: nauc_mrr_at_3_diff1
value: 31.490392724228837
- type: nauc_mrr_at_3_max
value: 21.643605643490904
- type: nauc_mrr_at_3_std
value: 1.6756866672672965
- type: nauc_mrr_at_5_diff1
value: 30.18536933081793
- type: nauc_mrr_at_5_max
value: 21.27264373907216
- type: nauc_mrr_at_5_std
value: 1.7079689552978534
- type: nauc_ndcg_at_1000_diff1
value: 28.11169834333845
- type: nauc_ndcg_at_1000_max
value: 22.65134504760621
- type: nauc_ndcg_at_1000_std
value: 8.353986044564932
- type: nauc_ndcg_at_100_diff1
value: 28.265985165496417
- type: nauc_ndcg_at_100_max
value: 22.530347672551887
- type: nauc_ndcg_at_100_std
value: 6.968755339521627
- type: nauc_ndcg_at_10_diff1
value: 29.088878880551906
- type: nauc_ndcg_at_10_max
value: 19.918818478137702
- type: nauc_ndcg_at_10_std
value: 2.5519795248451795
- type: nauc_ndcg_at_1_diff1
value: 38.253685943964285
- type: nauc_ndcg_at_1_max
value: 23.506493457282993
- type: nauc_ndcg_at_1_std
value: 0.36623457899262024
- type: nauc_ndcg_at_20_diff1
value: 27.910656458566045
- type: nauc_ndcg_at_20_max
value: 20.295061759944723
- type: nauc_ndcg_at_20_std
value: 3.6145835770906833
- type: nauc_ndcg_at_3_diff1
value: 31.233680318242634
- type: nauc_ndcg_at_3_max
value: 19.494683132285033
- type: nauc_ndcg_at_3_std
value: 0.04355647255533374
- type: nauc_ndcg_at_5_diff1
value: 29.60761336088322
- type: nauc_ndcg_at_5_max
value: 19.80719438136175
- type: nauc_ndcg_at_5_std
value: 0.6195875169583498
- type: nauc_precision_at_1000_diff1
value: -4.9635863591586284
- type: nauc_precision_at_1000_max
value: 10.205880001940644
- type: nauc_precision_at_1000_std
value: 13.475741604004421
- type: nauc_precision_at_100_diff1
value: 7.633273326571685
- type: nauc_precision_at_100_max
value: 23.151284304137622
- type: nauc_precision_at_100_std
value: 20.405156194796863
- type: nauc_precision_at_10_diff1
value: 18.705937577794554
- type: nauc_precision_at_10_max
value: 20.628035226019335
- type: nauc_precision_at_10_std
value: 7.041902045527893
- type: nauc_precision_at_1_diff1
value: 38.253685943964285
- type: nauc_precision_at_1_max
value: 23.506493457282993
- type: nauc_precision_at_1_std
value: 0.36623457899262024
- type: nauc_precision_at_20_diff1
value: 14.129163643470525
- type: nauc_precision_at_20_max
value: 20.39744876825584
- type: nauc_precision_at_20_std
value: 10.808780160453079
- type: nauc_precision_at_3_diff1
value: 24.81724694529244
- type: nauc_precision_at_3_max
value: 19.750250129235862
- type: nauc_precision_at_3_std
value: 1.6383497722612925
- type: nauc_precision_at_5_diff1
value: 20.559816479129896
- type: nauc_precision_at_5_max
value: 20.737938153703908
- type: nauc_precision_at_5_std
value: 2.9329054609944767
- type: nauc_recall_at_1000_diff1
value: 14.657477263404504
- type: nauc_recall_at_1000_max
value: 27.29789317523507
- type: nauc_recall_at_1000_std
value: 41.54560242921126
- type: nauc_recall_at_100_diff1
value: 19.668816678808028
- type: nauc_recall_at_100_max
value: 24.546392696829855
- type: nauc_recall_at_100_std
value: 20.045457113413388
- type: nauc_recall_at_10_diff1
value: 22.57592036080691
- type: nauc_recall_at_10_max
value: 17.30186041967476
- type: nauc_recall_at_10_std
value: 5.75949108824036
- type: nauc_recall_at_1_diff1
value: 39.50866679123135
- type: nauc_recall_at_1_max
value: 21.067083493139833
- type: nauc_recall_at_1_std
value: -1.255629309903365
- type: nauc_recall_at_20_diff1
value: 18.597441888297915
- type: nauc_recall_at_20_max
value: 17.76783323985467
- type: nauc_recall_at_20_std
value: 7.756313900025849
- type: nauc_recall_at_3_diff1
value: 27.928359626631092
- type: nauc_recall_at_3_max
value: 16.336637037641772
- type: nauc_recall_at_3_std
value: -1.3417417785554366
- type: nauc_recall_at_5_diff1
value: 24.22251676423838
- type: nauc_recall_at_5_max
value: 16.857422692031594
- type: nauc_recall_at_5_std
value: 0.6185629064463674
- type: ndcg_at_1
value: 15.068000000000001
- type: ndcg_at_10
value: 21.66
- type: ndcg_at_100
value: 27.245
- type: ndcg_at_1000
value: 30.591
- type: ndcg_at_20
value: 23.955000000000002
- type: ndcg_at_3
value: 17.968999999999998
- type: ndcg_at_5
value: 19.352
- type: precision_at_1
value: 15.068000000000001
- type: precision_at_10
value: 4.326
- type: precision_at_100
value: 0.855
- type: precision_at_1000
value: 0.132
- type: precision_at_20
value: 2.8369999999999997
- type: precision_at_3
value: 8.713999999999999
- type: precision_at_5
value: 6.3469999999999995
- type: recall_at_1
value: 12.416
- type: recall_at_10
value: 30.008000000000003
- type: recall_at_100
value: 54.498999999999995
- type: recall_at_1000
value: 78.32000000000001
- type: recall_at_20
value: 38.378
- type: recall_at_3
value: 19.79
- type: recall_at_5
value: 23.376
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 22.302333333333333
- type: ndcg_at_10
value: 22.302333333333333
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 17.253
- type: map_at_1
value: 9.722999999999999
- type: map_at_10
value: 14.280999999999999
- type: map_at_100
value: 15.065000000000001
- type: map_at_1000
value: 15.154
- type: map_at_20
value: 14.704999999999998
- type: map_at_3
value: 13.004
- type: map_at_5
value: 13.626
- type: mrr_at_1
value: 11.809815950920246
- type: mrr_at_10
value: 16.383959002824028
- type: mrr_at_100
value: 17.188709691814985
- type: mrr_at_1000
value: 17.269435610183017
- type: mrr_at_20
value: 16.836972625425393
- type: mrr_at_3
value: 15.081799591002035
- type: mrr_at_5
value: 15.710633946830258
- type: nauc_map_at_1000_diff1
value: 28.431623275634156
- type: nauc_map_at_1000_max
value: 14.476316695164403
- type: nauc_map_at_1000_std
value: 4.607998508591043
- type: nauc_map_at_100_diff1
value: 28.42367177875125
- type: nauc_map_at_100_max
value: 14.394653506060012
- type: nauc_map_at_100_std
value: 4.567472357591712
- type: nauc_map_at_10_diff1
value: 28.60653023312716
- type: nauc_map_at_10_max
value: 14.78157644547682
- type: nauc_map_at_10_std
value: 3.94994519901673
- type: nauc_map_at_1_diff1
value: 34.36968432094878
- type: nauc_map_at_1_max
value: 17.456572010137457
- type: nauc_map_at_1_std
value: 4.2640515305539415
- type: nauc_map_at_20_diff1
value: 28.510596490501573
- type: nauc_map_at_20_max
value: 14.318541992037401
- type: nauc_map_at_20_std
value: 4.254075392620963
- type: nauc_map_at_3_diff1
value: 30.539716169861936
- type: nauc_map_at_3_max
value: 16.14471431902583
- type: nauc_map_at_3_std
value: 4.973502209268125
- type: nauc_map_at_5_diff1
value: 29.261684655915225
- type: nauc_map_at_5_max
value: 15.372748605327446
- type: nauc_map_at_5_std
value: 4.39285622535654
- type: nauc_mrr_at_1000_diff1
value: 28.972718024301447
- type: nauc_mrr_at_1000_max
value: 17.826835397341046
- type: nauc_mrr_at_1000_std
value: 6.917284034347911
- type: nauc_mrr_at_100_diff1
value: 28.945997371755087
- type: nauc_mrr_at_100_max
value: 17.739278412823893
- type: nauc_mrr_at_100_std
value: 6.899424135908487
- type: nauc_mrr_at_10_diff1
value: 29.06935519309891
- type: nauc_mrr_at_10_max
value: 18.21083753088906
- type: nauc_mrr_at_10_std
value: 6.518493253737144
- type: nauc_mrr_at_1_diff1
value: 35.63041619844435
- type: nauc_mrr_at_1_max
value: 22.830306049699338
- type: nauc_mrr_at_1_std
value: 7.826683917417351
- type: nauc_mrr_at_20_diff1
value: 29.016004511022537
- type: nauc_mrr_at_20_max
value: 17.788437345787926
- type: nauc_mrr_at_20_std
value: 6.652263770077456
- type: nauc_mrr_at_3_diff1
value: 30.644333070723466
- type: nauc_mrr_at_3_max
value: 19.667632613725225
- type: nauc_mrr_at_3_std
value: 7.743380165559918
- type: nauc_mrr_at_5_diff1
value: 29.829376205828805
- type: nauc_mrr_at_5_max
value: 18.722595091544253
- type: nauc_mrr_at_5_std
value: 6.818524829545593
- type: nauc_ndcg_at_1000_diff1
value: 25.62248172657835
- type: nauc_ndcg_at_1000_max
value: 14.223326419511073
- type: nauc_ndcg_at_1000_std
value: 7.495752604082028
- type: nauc_ndcg_at_100_diff1
value: 25.499428653265642
- type: nauc_ndcg_at_100_max
value: 12.585064293899102
- type: nauc_ndcg_at_100_std
value: 6.664889384341954
- type: nauc_ndcg_at_10_diff1
value: 25.74972755098383
- type: nauc_ndcg_at_10_max
value: 13.793434874824303
- type: nauc_ndcg_at_10_std
value: 3.883648047462527
- type: nauc_ndcg_at_1_diff1
value: 35.63041619844435
- type: nauc_ndcg_at_1_max
value: 22.830306049699338
- type: nauc_ndcg_at_1_std
value: 7.826683917417351
- type: nauc_ndcg_at_20_diff1
value: 25.334745687494443
- type: nauc_ndcg_at_20_max
value: 12.305607906859144
- type: nauc_ndcg_at_20_std
value: 4.7413728340444505
- type: nauc_ndcg_at_3_diff1
value: 29.45395763143249
- type: nauc_ndcg_at_3_max
value: 16.23690234046979
- type: nauc_ndcg_at_3_std
value: 6.142105291678576
- type: nauc_ndcg_at_5_diff1
value: 27.444736442905455
- type: nauc_ndcg_at_5_max
value: 14.93362615759676
- type: nauc_ndcg_at_5_std
value: 4.7342440148611225
- type: nauc_precision_at_1000_diff1
value: 16.80575206659899
- type: nauc_precision_at_1000_max
value: 17.66226703408546
- type: nauc_precision_at_1000_std
value: 18.77422949877631
- type: nauc_precision_at_100_diff1
value: 21.105287938477233
- type: nauc_precision_at_100_max
value: 13.591179380636214
- type: nauc_precision_at_100_std
value: 16.55840962012843
- type: nauc_precision_at_10_diff1
value: 21.469758913525254
- type: nauc_precision_at_10_max
value: 15.320780706573464
- type: nauc_precision_at_10_std
value: 6.351289997170259
- type: nauc_precision_at_1_diff1
value: 35.63041619844435
- type: nauc_precision_at_1_max
value: 22.830306049699338
- type: nauc_precision_at_1_std
value: 7.826683917417351
- type: nauc_precision_at_20_diff1
value: 20.438996654370953
- type: nauc_precision_at_20_max
value: 11.895395539109575
- type: nauc_precision_at_20_std
value: 9.227372989467945
- type: nauc_precision_at_3_diff1
value: 27.958385745280534
- type: nauc_precision_at_3_max
value: 18.76663358991842
- type: nauc_precision_at_3_std
value: 8.804799926813658
- type: nauc_precision_at_5_diff1
value: 25.20756412916346
- type: nauc_precision_at_5_max
value: 17.16752690039525
- type: nauc_precision_at_5_std
value: 7.822524248176865
- type: nauc_recall_at_1000_diff1
value: 17.093227818066353
- type: nauc_recall_at_1000_max
value: 12.628515233697735
- type: nauc_recall_at_1000_std
value: 16.519924218447994
- type: nauc_recall_at_100_diff1
value: 18.19732935930814
- type: nauc_recall_at_100_max
value: 4.740051109026774
- type: nauc_recall_at_100_std
value: 10.729043783837753
- type: nauc_recall_at_10_diff1
value: 17.84235497242283
- type: nauc_recall_at_10_max
value: 7.9110522988146155
- type: nauc_recall_at_10_std
value: 1.147900198002905
- type: nauc_recall_at_1_diff1
value: 34.36968432094878
- type: nauc_recall_at_1_max
value: 17.456572010137457
- type: nauc_recall_at_1_std
value: 4.2640515305539415
- type: nauc_recall_at_20_diff1
value: 16.692476991368853
- type: nauc_recall_at_20_max
value: 3.809776817661501
- type: nauc_recall_at_20_std
value: 3.6575551737685954
- type: nauc_recall_at_3_diff1
value: 25.110591985459862
- type: nauc_recall_at_3_max
value: 13.681824792451245
- type: nauc_recall_at_3_std
value: 5.806771643452482
- type: nauc_recall_at_5_diff1
value: 21.0191985797923
- type: nauc_recall_at_5_max
value: 10.837381063643834
- type: nauc_recall_at_5_std
value: 3.228418252689027
- type: ndcg_at_1
value: 11.81
- type: ndcg_at_10
value: 17.253
- type: ndcg_at_100
value: 21.404
- type: ndcg_at_1000
value: 24.09
- type: ndcg_at_20
value: 18.801000000000002
- type: ndcg_at_3
value: 14.716999999999999
- type: ndcg_at_5
value: 15.706000000000001
- type: precision_at_1
value: 11.81
- type: precision_at_10
value: 2.9749999999999996
- type: precision_at_100
value: 0.543
- type: precision_at_1000
value: 0.084
- type: precision_at_20
value: 1.848
- type: precision_at_3
value: 6.902
- type: precision_at_5
value: 4.816
- type: recall_at_1
value: 9.722999999999999
- type: recall_at_10
value: 24.569
- type: recall_at_100
value: 43.997
- type: recall_at_1000
value: 64.44
- type: recall_at_20
value: 30.505
- type: recall_at_3
value: 17.134
- type: recall_at_5
value: 19.72
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 13.308
- type: map_at_1
value: 7.497
- type: map_at_10
value: 10.846
- type: map_at_100
value: 11.498999999999999
- type: map_at_1000
value: 11.618
- type: map_at_20
value: 11.161999999999999
- type: map_at_3
value: 9.658999999999999
- type: map_at_5
value: 10.298
- type: mrr_at_1
value: 9.11906400550585
- type: mrr_at_10
value: 12.993232392750626
- type: mrr_at_100
value: 13.701403675494117
- type: mrr_at_1000
value: 13.798101712770123
- type: mrr_at_20
value: 13.360764217937035
- type: mrr_at_3
value: 11.6655196145905
- type: mrr_at_5
value: 12.362353750860274
- type: nauc_map_at_1000_diff1
value: 29.030158454163164
- type: nauc_map_at_1000_max
value: 15.750545094681929
- type: nauc_map_at_1000_std
value: -3.0798436292807834
- type: nauc_map_at_100_diff1
value: 29.05038743174521
- type: nauc_map_at_100_max
value: 15.679082682471822
- type: nauc_map_at_100_std
value: -3.2003921265004855
- type: nauc_map_at_10_diff1
value: 29.680682239615308
- type: nauc_map_at_10_max
value: 15.532980267877802
- type: nauc_map_at_10_std
value: -3.622076099535413
- type: nauc_map_at_1_diff1
value: 37.49924172327444
- type: nauc_map_at_1_max
value: 14.852898999380606
- type: nauc_map_at_1_std
value: -3.8871845491808403
- type: nauc_map_at_20_diff1
value: 29.440127025124063
- type: nauc_map_at_20_max
value: 15.566926763278111
- type: nauc_map_at_20_std
value: -3.5118135905883445
- type: nauc_map_at_3_diff1
value: 31.87407675131833
- type: nauc_map_at_3_max
value: 16.133052442782088
- type: nauc_map_at_3_std
value: -3.7331459743832536
- type: nauc_map_at_5_diff1
value: 30.702048393849918
- type: nauc_map_at_5_max
value: 15.7292852737471
- type: nauc_map_at_5_std
value: -3.72714036461797
- type: nauc_mrr_at_1000_diff1
value: 27.069591144268795
- type: nauc_mrr_at_1000_max
value: 17.335323991978157
- type: nauc_mrr_at_1000_std
value: -2.1443215489774863
- type: nauc_mrr_at_100_diff1
value: 27.06995261671637
- type: nauc_mrr_at_100_max
value: 17.3285570198275
- type: nauc_mrr_at_100_std
value: -2.1819679734953903
- type: nauc_mrr_at_10_diff1
value: 27.57687228309106
- type: nauc_mrr_at_10_max
value: 17.166971785334017
- type: nauc_mrr_at_10_std
value: -2.6000743496984526
- type: nauc_mrr_at_1_diff1
value: 35.22676568917156
- type: nauc_mrr_at_1_max
value: 17.007211079819626
- type: nauc_mrr_at_1_std
value: -4.214696308727653
- type: nauc_mrr_at_20_diff1
value: 27.374588178560465
- type: nauc_mrr_at_20_max
value: 17.23758467893531
- type: nauc_mrr_at_20_std
value: -2.4124837810565603
- type: nauc_mrr_at_3_diff1
value: 29.722577971696918
- type: nauc_mrr_at_3_max
value: 18.07384167733403
- type: nauc_mrr_at_3_std
value: -3.003414797443647
- type: nauc_mrr_at_5_diff1
value: 28.45980370469956
- type: nauc_mrr_at_5_max
value: 17.511976658495847
- type: nauc_mrr_at_5_std
value: -2.5924858663986745
- type: nauc_ndcg_at_1000_diff1
value: 23.077231893052307
- type: nauc_ndcg_at_1000_max
value: 16.93593483664181
- type: nauc_ndcg_at_1000_std
value: 1.2092406562986315
- type: nauc_ndcg_at_100_diff1
value: 23.549727836162358
- type: nauc_ndcg_at_100_max
value: 15.750436011474273
- type: nauc_ndcg_at_100_std
value: -0.9019324316165611
- type: nauc_ndcg_at_10_diff1
value: 26.053761788639434
- type: nauc_ndcg_at_10_max
value: 15.3669306793647
- type: nauc_ndcg_at_10_std
value: -3.193779292269917
- type: nauc_ndcg_at_1_diff1
value: 35.22676568917156
- type: nauc_ndcg_at_1_max
value: 17.007211079819626
- type: nauc_ndcg_at_1_std
value: -4.214696308727653
- type: nauc_ndcg_at_20_diff1
value: 25.425326574435168
- type: nauc_ndcg_at_20_max
value: 15.385189154016906
- type: nauc_ndcg_at_20_std
value: -2.7870454259014545
- type: nauc_ndcg_at_3_diff1
value: 29.685264931512716
- type: nauc_ndcg_at_3_max
value: 17.07409526298788
- type: nauc_ndcg_at_3_std
value: -3.4063850629923293
- type: nauc_ndcg_at_5_diff1
value: 27.89860104840894
- type: nauc_ndcg_at_5_max
value: 15.996740122854927
- type: nauc_ndcg_at_5_std
value: -3.3146899970251873
- type: nauc_precision_at_1000_diff1
value: 6.214195083416471
- type: nauc_precision_at_1000_max
value: 24.273670809985404
- type: nauc_precision_at_1000_std
value: 17.553556491344104
- type: nauc_precision_at_100_diff1
value: 11.6615588663656
- type: nauc_precision_at_100_max
value: 20.59244105372682
- type: nauc_precision_at_100_std
value: 8.072189089366798
- type: nauc_precision_at_10_diff1
value: 18.279161444567706
- type: nauc_precision_at_10_max
value: 17.664508142320727
- type: nauc_precision_at_10_std
value: -1.0218966605840407
- type: nauc_precision_at_1_diff1
value: 35.22676568917156
- type: nauc_precision_at_1_max
value: 17.007211079819626
- type: nauc_precision_at_1_std
value: -4.214696308727653
- type: nauc_precision_at_20_diff1
value: 16.855549347544613
- type: nauc_precision_at_20_max
value: 18.640589054149743
- type: nauc_precision_at_20_std
value: 0.7553558754796067
- type: nauc_precision_at_3_diff1
value: 25.61293747306704
- type: nauc_precision_at_3_max
value: 20.254901193584562
- type: nauc_precision_at_3_std
value: -2.9517852127763153
- type: nauc_precision_at_5_diff1
value: 22.32451285561962
- type: nauc_precision_at_5_max
value: 18.709490300571886
- type: nauc_precision_at_5_std
value: -2.0702847848899615
- type: nauc_recall_at_1000_diff1
value: 8.102081393478185
- type: nauc_recall_at_1000_max
value: 17.111395305264892
- type: nauc_recall_at_1000_std
value: 14.340291614611578
- type: nauc_recall_at_100_diff1
value: 12.480368811829736
- type: nauc_recall_at_100_max
value: 12.879220685006636
- type: nauc_recall_at_100_std
value: 3.650162252310097
- type: nauc_recall_at_10_diff1
value: 19.461318204968205
- type: nauc_recall_at_10_max
value: 12.823289358103562
- type: nauc_recall_at_10_std
value: -3.1960264321653895
- type: nauc_recall_at_1_diff1
value: 37.49924172327444
- type: nauc_recall_at_1_max
value: 14.852898999380606
- type: nauc_recall_at_1_std
value: -3.8871845491808403
- type: nauc_recall_at_20_diff1
value: 17.698352862902524
- type: nauc_recall_at_20_max
value: 12.409413309293047
- type: nauc_recall_at_20_std
value: -2.0913697847507136
- type: nauc_recall_at_3_diff1
value: 26.236763474946116
- type: nauc_recall_at_3_max
value: 15.89287407458128
- type: nauc_recall_at_3_std
value: -3.776018275852628
- type: nauc_recall_at_5_diff1
value: 23.10472386873395
- type: nauc_recall_at_5_max
value: 14.09706657151941
- type: nauc_recall_at_5_std
value: -3.7053105237887296
- type: ndcg_at_1
value: 9.119
- type: ndcg_at_10
value: 13.308
- type: ndcg_at_100
value: 16.98
- type: ndcg_at_1000
value: 20.488
- type: ndcg_at_20
value: 14.455000000000002
- type: ndcg_at_3
value: 10.982
- type: ndcg_at_5
value: 12.003
- type: precision_at_1
value: 9.119
- type: precision_at_10
value: 2.4979999999999998
- type: precision_at_100
value: 0.519
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 1.5779999999999998
- type: precision_at_3
value: 5.288
- type: precision_at_5
value: 3.8890000000000002
- type: recall_at_1
value: 7.497
- type: recall_at_10
value: 18.817999999999998
- type: recall_at_100
value: 35.893
- type: recall_at_1000
value: 61.966
- type: recall_at_20
value: 23.017000000000003
- type: recall_at_3
value: 12.199
- type: recall_at_5
value: 14.87
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 20.061999999999998
- type: map_at_1
value: 11.856
- type: map_at_10
value: 16.685
- type: map_at_100
value: 17.433
- type: map_at_1000
value: 17.558
- type: map_at_20
value: 17.041999999999998
- type: map_at_3
value: 15.021
- type: map_at_5
value: 15.931999999999999
- type: mrr_at_1
value: 14.17910447761194
- type: mrr_at_10
value: 19.398468964700307
- type: mrr_at_100
value: 20.153361230634783
- type: mrr_at_1000
value: 20.25140420668968
- type: mrr_at_20
value: 19.79354704809282
- type: mrr_at_3
value: 17.63059701492538
- type: mrr_at_5
value: 18.516791044776127
- type: nauc_map_at_1000_diff1
value: 39.29033459612684
- type: nauc_map_at_1000_max
value: 27.17416795511821
- type: nauc_map_at_1000_std
value: -6.92127611795475
- type: nauc_map_at_100_diff1
value: 39.32396099754708
- type: nauc_map_at_100_max
value: 27.09334212594238
- type: nauc_map_at_100_std
value: -7.039062385443858
- type: nauc_map_at_10_diff1
value: 39.94340086930468
- type: nauc_map_at_10_max
value: 27.423789336152417
- type: nauc_map_at_10_std
value: -7.508495669216843
- type: nauc_map_at_1_diff1
value: 47.64613699501138
- type: nauc_map_at_1_max
value: 31.632492599268748
- type: nauc_map_at_1_std
value: -7.883784832592304
- type: nauc_map_at_20_diff1
value: 39.45107288329592
- type: nauc_map_at_20_max
value: 27.15650902645131
- type: nauc_map_at_20_std
value: -7.301916707077087
- type: nauc_map_at_3_diff1
value: 41.801336320148984
- type: nauc_map_at_3_max
value: 28.342684341392683
- type: nauc_map_at_3_std
value: -8.213654438632787
- type: nauc_map_at_5_diff1
value: 40.973958128612786
- type: nauc_map_at_5_max
value: 28.355847958983126
- type: nauc_map_at_5_std
value: -7.204454459764011
- type: nauc_mrr_at_1000_diff1
value: 39.68737143543835
- type: nauc_mrr_at_1000_max
value: 28.74366308891808
- type: nauc_mrr_at_1000_std
value: -5.74519909264754
- type: nauc_mrr_at_100_diff1
value: 39.696965050178875
- type: nauc_mrr_at_100_max
value: 28.71065540406762
- type: nauc_mrr_at_100_std
value: -5.8117683155682895
- type: nauc_mrr_at_10_diff1
value: 40.22891666712493
- type: nauc_mrr_at_10_max
value: 28.97882832718155
- type: nauc_mrr_at_10_std
value: -6.167061574555064
- type: nauc_mrr_at_1_diff1
value: 48.39795549312159
- type: nauc_mrr_at_1_max
value: 33.31270433423697
- type: nauc_mrr_at_1_std
value: -5.8264509798445925
- type: nauc_mrr_at_20_diff1
value: 39.75516014377185
- type: nauc_mrr_at_20_max
value: 28.762238070807676
- type: nauc_mrr_at_20_std
value: -6.015233094372284
- type: nauc_mrr_at_3_diff1
value: 42.39647678330573
- type: nauc_mrr_at_3_max
value: 29.854246402890674
- type: nauc_mrr_at_3_std
value: -6.989062488249666
- type: nauc_mrr_at_5_diff1
value: 41.32547115377251
- type: nauc_mrr_at_5_max
value: 29.756253662694554
- type: nauc_mrr_at_5_std
value: -5.989324088608618
- type: nauc_ndcg_at_1000_diff1
value: 33.24611188020779
- type: nauc_ndcg_at_1000_max
value: 25.5685050419863
- type: nauc_ndcg_at_1000_std
value: -2.1838171971216838
- type: nauc_ndcg_at_100_diff1
value: 34.12429897480726
- type: nauc_ndcg_at_100_max
value: 24.386449655174115
- type: nauc_ndcg_at_100_std
value: -4.463092158837694
- type: nauc_ndcg_at_10_diff1
value: 36.7514146310574
- type: nauc_ndcg_at_10_max
value: 25.816604124438165
- type: nauc_ndcg_at_10_std
value: -6.864047505974296
- type: nauc_ndcg_at_1_diff1
value: 48.39795549312159
- type: nauc_ndcg_at_1_max
value: 33.31270433423697
- type: nauc_ndcg_at_1_std
value: -5.8264509798445925
- type: nauc_ndcg_at_20_diff1
value: 35.19768360191347
- type: nauc_ndcg_at_20_max
value: 25.02001675750392
- type: nauc_ndcg_at_20_std
value: -6.20782733166831
- type: nauc_ndcg_at_3_diff1
value: 40.154344522643925
- type: nauc_ndcg_at_3_max
value: 27.955302837392672
- type: nauc_ndcg_at_3_std
value: -7.6328532886404235
- type: nauc_ndcg_at_5_diff1
value: 38.743591122825606
- type: nauc_ndcg_at_5_max
value: 27.72241812814964
- type: nauc_ndcg_at_5_std
value: -6.257812072012101
- type: nauc_precision_at_1000_diff1
value: -3.9866748764702096
- type: nauc_precision_at_1000_max
value: 14.72470736881832
- type: nauc_precision_at_1000_std
value: 15.962534584653012
- type: nauc_precision_at_100_diff1
value: 14.40948301991166
- type: nauc_precision_at_100_max
value: 16.61733733078467
- type: nauc_precision_at_100_std
value: 6.847882296599798
- type: nauc_precision_at_10_diff1
value: 27.51873293006865
- type: nauc_precision_at_10_max
value: 22.893866555907746
- type: nauc_precision_at_10_std
value: -3.030805589162383
- type: nauc_precision_at_1_diff1
value: 48.39795549312159
- type: nauc_precision_at_1_max
value: 33.31270433423697
- type: nauc_precision_at_1_std
value: -5.8264509798445925
- type: nauc_precision_at_20_diff1
value: 22.56834807636722
- type: nauc_precision_at_20_max
value: 20.490661671424448
- type: nauc_precision_at_20_std
value: -0.660069645072748
- type: nauc_precision_at_3_diff1
value: 36.978184171791156
- type: nauc_precision_at_3_max
value: 26.478381926029265
- type: nauc_precision_at_3_std
value: -6.091960417034656
- type: nauc_precision_at_5_diff1
value: 33.58525371051779
- type: nauc_precision_at_5_max
value: 26.334754741578593
- type: nauc_precision_at_5_std
value: -3.154368502496007
- type: nauc_recall_at_1000_diff1
value: 5.958742292353638
- type: nauc_recall_at_1000_max
value: 15.864543076240528
- type: nauc_recall_at_1000_std
value: 21.86695402215286
- type: nauc_recall_at_100_diff1
value: 17.82865358233198
- type: nauc_recall_at_100_max
value: 13.118309558968022
- type: nauc_recall_at_100_std
value: 2.3032751559115114
- type: nauc_recall_at_10_diff1
value: 27.980644115353996
- type: nauc_recall_at_10_max
value: 19.39950863468228
- type: nauc_recall_at_10_std
value: -6.36618746193429
- type: nauc_recall_at_1_diff1
value: 47.64613699501138
- type: nauc_recall_at_1_max
value: 31.632492599268748
- type: nauc_recall_at_1_std
value: -7.883784832592304
- type: nauc_recall_at_20_diff1
value: 22.967595804626253
- type: nauc_recall_at_20_max
value: 16.693327271336244
- type: nauc_recall_at_20_std
value: -4.559238353011102
- type: nauc_recall_at_3_diff1
value: 35.41022087124811
- type: nauc_recall_at_3_max
value: 24.543890488663166
- type: nauc_recall_at_3_std
value: -8.200059552235023
- type: nauc_recall_at_5_diff1
value: 32.09822917090586
- type: nauc_recall_at_5_max
value: 23.82588196783892
- type: nauc_recall_at_5_std
value: -4.932704288647733
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.061999999999998
- type: ndcg_at_100
value: 24.149
- type: ndcg_at_1000
value: 27.644999999999996
- type: ndcg_at_20
value: 21.387999999999998
- type: ndcg_at_3
value: 16.794
- type: ndcg_at_5
value: 18.224
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.582
- type: precision_at_100
value: 0.623
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 2.1319999999999997
- type: precision_at_3
value: 7.774
- type: precision_at_5
value: 5.5969999999999995
- type: recall_at_1
value: 11.856
- type: recall_at_10
value: 27.778999999999996
- type: recall_at_100
value: 46.733000000000004
- type: recall_at_1000
value: 72.481
- type: recall_at_20
value: 32.737
- type: recall_at_3
value: 18.859
- type: recall_at_5
value: 22.435
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 23.735999999999997
- type: map_at_1
value: 13.164000000000001
- type: map_at_10
value: 19.317999999999998
- type: map_at_100
value: 20.463
- type: map_at_1000
value: 20.646
- type: map_at_20
value: 19.808
- type: map_at_3
value: 17.126
- type: map_at_5
value: 18.056
- type: mrr_at_1
value: 16.600790513833992
- type: mrr_at_10
value: 22.620067130936693
- type: mrr_at_100
value: 23.601448756772193
- type: mrr_at_1000
value: 23.675507750087586
- type: mrr_at_20
value: 23.09510872850641
- type: mrr_at_3
value: 20.685111989459816
- type: mrr_at_5
value: 21.46574440052701
- type: nauc_map_at_1000_diff1
value: 38.04966249247377
- type: nauc_map_at_1000_max
value: 16.252263992463384
- type: nauc_map_at_1000_std
value: -1.7460502582062356
- type: nauc_map_at_100_diff1
value: 38.014610979412474
- type: nauc_map_at_100_max
value: 16.21534617931594
- type: nauc_map_at_100_std
value: -1.862936037740923
- type: nauc_map_at_10_diff1
value: 37.85306201039408
- type: nauc_map_at_10_max
value: 16.316152483605283
- type: nauc_map_at_10_std
value: -1.9300768321014996
- type: nauc_map_at_1_diff1
value: 46.32670783118563
- type: nauc_map_at_1_max
value: 19.162748070034993
- type: nauc_map_at_1_std
value: -7.2143378209361435
- type: nauc_map_at_20_diff1
value: 37.76015277914087
- type: nauc_map_at_20_max
value: 16.402558719060888
- type: nauc_map_at_20_std
value: -2.065612538672495
- type: nauc_map_at_3_diff1
value: 39.76679931113434
- type: nauc_map_at_3_max
value: 16.834290630961544
- type: nauc_map_at_3_std
value: -3.9003170439130335
- type: nauc_map_at_5_diff1
value: 39.03208154755538
- type: nauc_map_at_5_max
value: 16.225900244095133
- type: nauc_map_at_5_std
value: -2.4557998742917273
- type: nauc_mrr_at_1000_diff1
value: 37.458213267102465
- type: nauc_mrr_at_1000_max
value: 16.263132423271077
- type: nauc_mrr_at_1000_std
value: -0.6455583895471498
- type: nauc_mrr_at_100_diff1
value: 37.45543984270519
- type: nauc_mrr_at_100_max
value: 16.185738866185893
- type: nauc_mrr_at_100_std
value: -0.6962640945779722
- type: nauc_mrr_at_10_diff1
value: 37.16827089026705
- type: nauc_mrr_at_10_max
value: 15.901025716349201
- type: nauc_mrr_at_10_std
value: -0.6599647334904797
- type: nauc_mrr_at_1_diff1
value: 44.322572770568456
- type: nauc_mrr_at_1_max
value: 19.02126117731051
- type: nauc_mrr_at_1_std
value: -5.8998188281784625
- type: nauc_mrr_at_20_diff1
value: 37.24551389599038
- type: nauc_mrr_at_20_max
value: 16.113728443160127
- type: nauc_mrr_at_20_std
value: -0.8856480048238807
- type: nauc_mrr_at_3_diff1
value: 38.800389636963004
- type: nauc_mrr_at_3_max
value: 16.691447775512863
- type: nauc_mrr_at_3_std
value: -2.2008701696190474
- type: nauc_mrr_at_5_diff1
value: 38.17066041754819
- type: nauc_mrr_at_5_max
value: 15.854986493430074
- type: nauc_mrr_at_5_std
value: -1.3419132385788708
- type: nauc_ndcg_at_1000_diff1
value: 36.500354605077305
- type: nauc_ndcg_at_1000_max
value: 18.158853474546227
- type: nauc_ndcg_at_1000_std
value: 3.7042707188045783
- type: nauc_ndcg_at_100_diff1
value: 35.68797486655767
- type: nauc_ndcg_at_100_max
value: 15.949868116992763
- type: nauc_ndcg_at_100_std
value: 1.8743757496922573
- type: nauc_ndcg_at_10_diff1
value: 34.44579459042251
- type: nauc_ndcg_at_10_max
value: 14.976928472341097
- type: nauc_ndcg_at_10_std
value: 0.668632426387858
- type: nauc_ndcg_at_1_diff1
value: 44.322572770568456
- type: nauc_ndcg_at_1_max
value: 19.02126117731051
- type: nauc_ndcg_at_1_std
value: -5.8998188281784625
- type: nauc_ndcg_at_20_diff1
value: 34.47554348325645
- type: nauc_ndcg_at_20_max
value: 15.617518114283014
- type: nauc_ndcg_at_20_std
value: 0.23279335295578624
- type: nauc_ndcg_at_3_diff1
value: 37.34865309502302
- type: nauc_ndcg_at_3_max
value: 15.6035028610235
- type: nauc_ndcg_at_3_std
value: -2.042290469888462
- type: nauc_ndcg_at_5_diff1
value: 36.710946337067
- type: nauc_ndcg_at_5_max
value: 14.502265833101022
- type: nauc_ndcg_at_5_std
value: -0.26386753108907807
- type: nauc_precision_at_1000_diff1
value: 3.5611970722748056
- type: nauc_precision_at_1000_max
value: 6.9688736574296275
- type: nauc_precision_at_1000_std
value: 7.291986774352235
- type: nauc_precision_at_100_diff1
value: 18.866491470530185
- type: nauc_precision_at_100_max
value: 3.0721103361408497
- type: nauc_precision_at_100_std
value: 4.384934503700695
- type: nauc_precision_at_10_diff1
value: 20.850504784204883
- type: nauc_precision_at_10_max
value: 10.633189141801425
- type: nauc_precision_at_10_std
value: 5.014926409884033
- type: nauc_precision_at_1_diff1
value: 44.322572770568456
- type: nauc_precision_at_1_max
value: 19.02126117731051
- type: nauc_precision_at_1_std
value: -5.8998188281784625
- type: nauc_precision_at_20_diff1
value: 20.309109922155518
- type: nauc_precision_at_20_max
value: 9.029797084048417
- type: nauc_precision_at_20_std
value: 2.758218391395686
- type: nauc_precision_at_3_diff1
value: 30.196789766812422
- type: nauc_precision_at_3_max
value: 13.456577178909065
- type: nauc_precision_at_3_std
value: 0.49917879030090373
- type: nauc_precision_at_5_diff1
value: 27.706537485425653
- type: nauc_precision_at_5_max
value: 9.849229139569182
- type: nauc_precision_at_5_std
value: 3.685125093555483
- type: nauc_recall_at_1000_diff1
value: 33.96229420221514
- type: nauc_recall_at_1000_max
value: 37.16052892689619
- type: nauc_recall_at_1000_std
value: 36.18222346361014
- type: nauc_recall_at_100_diff1
value: 27.657710979013174
- type: nauc_recall_at_100_max
value: 15.352705013529967
- type: nauc_recall_at_100_std
value: 11.850919034123116
- type: nauc_recall_at_10_diff1
value: 25.46843551212912
- type: nauc_recall_at_10_max
value: 12.024769591895815
- type: nauc_recall_at_10_std
value: 5.710557786436904
- type: nauc_recall_at_1_diff1
value: 46.32670783118563
- type: nauc_recall_at_1_max
value: 19.162748070034993
- type: nauc_recall_at_1_std
value: -7.2143378209361435
- type: nauc_recall_at_20_diff1
value: 24.950754303786603
- type: nauc_recall_at_20_max
value: 13.779914894639022
- type: nauc_recall_at_20_std
value: 4.337235880676669
- type: nauc_recall_at_3_diff1
value: 33.979943512337485
- type: nauc_recall_at_3_max
value: 14.35407227008922
- type: nauc_recall_at_3_std
value: -0.5408111812033761
- type: nauc_recall_at_5_diff1
value: 31.887819659716687
- type: nauc_recall_at_5_max
value: 12.266354466300289
- type: nauc_recall_at_5_std
value: 3.67855636796736
- type: ndcg_at_1
value: 16.601
- type: ndcg_at_10
value: 23.735999999999997
- type: ndcg_at_100
value: 29.047
- type: ndcg_at_1000
value: 32.323
- type: ndcg_at_20
value: 25.222
- type: ndcg_at_3
value: 20.013
- type: ndcg_at_5
value: 21.165
- type: precision_at_1
value: 16.601
- type: precision_at_10
value: 4.7829999999999995
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.197
- type: precision_at_20
value: 3.0429999999999997
- type: precision_at_3
value: 9.881
- type: precision_at_5
value: 7.074999999999999
- type: recall_at_1
value: 13.164000000000001
- type: recall_at_10
value: 33.041
- type: recall_at_100
value: 57.907
- type: recall_at_1000
value: 79.887
- type: recall_at_20
value: 38.833
- type: recall_at_3
value: 21.397
- type: recall_at_5
value: 24.863
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 16.794999999999998
- type: map_at_1
value: 10.08
- type: map_at_10
value: 14.069
- type: map_at_100
value: 14.860000000000001
- type: map_at_1000
value: 14.968
- type: map_at_20
value: 14.46
- type: map_at_3
value: 12.498
- type: map_at_5
value: 13.324
- type: mrr_at_1
value: 10.905730129390019
- type: mrr_at_10
value: 15.199146201918854
- type: mrr_at_100
value: 16.00264496872985
- type: mrr_at_1000
value: 16.09501918722929
- type: mrr_at_20
value: 15.633768523540942
- type: mrr_at_3
value: 13.493530499075785
- type: mrr_at_5
value: 14.36229205175601
- type: nauc_map_at_1000_diff1
value: 22.950167181074935
- type: nauc_map_at_1000_max
value: 18.717980764527866
- type: nauc_map_at_1000_std
value: -6.25267811740101
- type: nauc_map_at_100_diff1
value: 22.94728125565202
- type: nauc_map_at_100_max
value: 18.719770177431155
- type: nauc_map_at_100_std
value: -6.323089529332934
- type: nauc_map_at_10_diff1
value: 22.346430545898126
- type: nauc_map_at_10_max
value: 18.80938448630523
- type: nauc_map_at_10_std
value: -7.0008855212089065
- type: nauc_map_at_1_diff1
value: 31.95272198051361
- type: nauc_map_at_1_max
value: 22.895259623649785
- type: nauc_map_at_1_std
value: -9.582498979740272
- type: nauc_map_at_20_diff1
value: 22.86393142972787
- type: nauc_map_at_20_max
value: 18.86264577450788
- type: nauc_map_at_20_std
value: -6.45412214287895
- type: nauc_map_at_3_diff1
value: 24.099754234032194
- type: nauc_map_at_3_max
value: 18.478412248275664
- type: nauc_map_at_3_std
value: -7.165377931835313
- type: nauc_map_at_5_diff1
value: 23.19897817392842
- type: nauc_map_at_5_max
value: 18.92826540423832
- type: nauc_map_at_5_std
value: -6.707296227198584
- type: nauc_mrr_at_1000_diff1
value: 23.213771617115064
- type: nauc_mrr_at_1000_max
value: 19.46803843401541
- type: nauc_mrr_at_1000_std
value: -6.593116817917101
- type: nauc_mrr_at_100_diff1
value: 23.231343638867212
- type: nauc_mrr_at_100_max
value: 19.452575181351783
- type: nauc_mrr_at_100_std
value: -6.626683471900298
- type: nauc_mrr_at_10_diff1
value: 22.605547224050298
- type: nauc_mrr_at_10_max
value: 19.467230968891098
- type: nauc_mrr_at_10_std
value: -7.304335909859951
- type: nauc_mrr_at_1_diff1
value: 32.21591155654977
- type: nauc_mrr_at_1_max
value: 23.898168032566968
- type: nauc_mrr_at_1_std
value: -10.113298227732622
- type: nauc_mrr_at_20_diff1
value: 23.17788912060599
- type: nauc_mrr_at_20_max
value: 19.681138842631395
- type: nauc_mrr_at_20_std
value: -6.668117181278914
- type: nauc_mrr_at_3_diff1
value: 24.324685622276508
- type: nauc_mrr_at_3_max
value: 19.28094175953585
- type: nauc_mrr_at_3_std
value: -7.896612175052549
- type: nauc_mrr_at_5_diff1
value: 23.56101870977645
- type: nauc_mrr_at_5_max
value: 19.830915115983956
- type: nauc_mrr_at_5_std
value: -7.247689969483312
- type: nauc_ndcg_at_1000_diff1
value: 21.101486527699198
- type: nauc_ndcg_at_1000_max
value: 17.661660378409593
- type: nauc_ndcg_at_1000_std
value: -1.627651235714167
- type: nauc_ndcg_at_100_diff1
value: 21.24378422898819
- type: nauc_ndcg_at_100_max
value: 17.493044854580774
- type: nauc_ndcg_at_100_std
value: -3.419151472965354
- type: nauc_ndcg_at_10_diff1
value: 18.656346406751783
- type: nauc_ndcg_at_10_max
value: 17.884063161669054
- type: nauc_ndcg_at_10_std
value: -6.3304637473674985
- type: nauc_ndcg_at_1_diff1
value: 32.21591155654977
- type: nauc_ndcg_at_1_max
value: 23.898168032566968
- type: nauc_ndcg_at_1_std
value: -10.113298227732622
- type: nauc_ndcg_at_20_diff1
value: 20.517191848764295
- type: nauc_ndcg_at_20_max
value: 18.302766567740257
- type: nauc_ndcg_at_20_std
value: -4.676348966303663
- type: nauc_ndcg_at_3_diff1
value: 22.229860548618376
- type: nauc_ndcg_at_3_max
value: 17.700425344082685
- type: nauc_ndcg_at_3_std
value: -6.599851166419227
- type: nauc_ndcg_at_5_diff1
value: 20.760917715244236
- type: nauc_ndcg_at_5_max
value: 18.320361121073617
- type: nauc_ndcg_at_5_std
value: -5.968352306934327
- type: nauc_precision_at_1000_diff1
value: 6.111781725558282
- type: nauc_precision_at_1000_max
value: 4.893420377600338
- type: nauc_precision_at_1000_std
value: 13.552656007673166
- type: nauc_precision_at_100_diff1
value: 16.174564725391278
- type: nauc_precision_at_100_max
value: 14.759102996929807
- type: nauc_precision_at_100_std
value: 6.644858850147021
- type: nauc_precision_at_10_diff1
value: 8.889821893924042
- type: nauc_precision_at_10_max
value: 15.574473888576575
- type: nauc_precision_at_10_std
value: -2.6115731810417366
- type: nauc_precision_at_1_diff1
value: 32.21591155654977
- type: nauc_precision_at_1_max
value: 23.898168032566968
- type: nauc_precision_at_1_std
value: -10.113298227732622
- type: nauc_precision_at_20_diff1
value: 14.776717379922587
- type: nauc_precision_at_20_max
value: 19.55219664568408
- type: nauc_precision_at_20_std
value: 2.8624434397265373
- type: nauc_precision_at_3_diff1
value: 17.24181833195652
- type: nauc_precision_at_3_max
value: 15.310985601785825
- type: nauc_precision_at_3_std
value: -5.815145792096017
- type: nauc_precision_at_5_diff1
value: 14.568702652383378
- type: nauc_precision_at_5_max
value: 16.90398092807837
- type: nauc_precision_at_5_std
value: -4.884555559489991
- type: nauc_recall_at_1000_diff1
value: 17.718608305964434
- type: nauc_recall_at_1000_max
value: 13.402668234081721
- type: nauc_recall_at_1000_std
value: 21.623779371422756
- type: nauc_recall_at_100_diff1
value: 18.932841874380454
- type: nauc_recall_at_100_max
value: 13.254799775623564
- type: nauc_recall_at_100_std
value: 4.592397886568707
- type: nauc_recall_at_10_diff1
value: 10.256753131266485
- type: nauc_recall_at_10_max
value: 15.34274332609289
- type: nauc_recall_at_10_std
value: -5.019100394026518
- type: nauc_recall_at_1_diff1
value: 31.95272198051361
- type: nauc_recall_at_1_max
value: 22.895259623649785
- type: nauc_recall_at_1_std
value: -9.582498979740272
- type: nauc_recall_at_20_diff1
value: 16.098225999062155
- type: nauc_recall_at_20_max
value: 16.11919310391389
- type: nauc_recall_at_20_std
value: -0.981856820033547
- type: nauc_recall_at_3_diff1
value: 16.896414167717293
- type: nauc_recall_at_3_max
value: 14.67655178851271
- type: nauc_recall_at_3_std
value: -4.885403738918622
- type: nauc_recall_at_5_diff1
value: 15.074392597620905
- type: nauc_recall_at_5_max
value: 16.457162195748644
- type: nauc_recall_at_5_std
value: -3.6534367499331046
- type: ndcg_at_1
value: 10.906
- type: ndcg_at_10
value: 16.794999999999998
- type: ndcg_at_100
value: 21.434
- type: ndcg_at_1000
value: 24.743000000000002
- type: ndcg_at_20
value: 18.275
- type: ndcg_at_3
value: 13.507
- type: ndcg_at_5
value: 14.953
- type: precision_at_1
value: 10.906
- type: precision_at_10
value: 2.791
- type: precision_at_100
value: 0.5559999999999999
- type: precision_at_1000
value: 0.091
- type: precision_at_20
value: 1.738
- type: precision_at_3
value: 5.545
- type: precision_at_5
value: 4.14
- type: recall_at_1
value: 10.08
- type: recall_at_10
value: 24.184
- type: recall_at_100
value: 46.967999999999996
- type: recall_at_1000
value: 72.92999999999999
- type: recall_at_20
value: 29.852
- type: recall_at_3
value: 15.440999999999999
- type: recall_at_5
value: 18.829
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 17.288999999999998
- type: map_at_1
value: 6.537
- type: map_at_10
value: 11.465
- type: map_at_100
value: 12.851
- type: map_at_1000
value: 13.045000000000002
- type: map_at_20
value: 12.174
- type: map_at_3
value: 9.369
- type: map_at_5
value: 10.331
- type: mrr_at_1
value: 15.2442996742671
- type: mrr_at_10
value: 23.59306654257793
- type: mrr_at_100
value: 24.771529453769823
- type: mrr_at_1000
value: 24.838895119526256
- type: mrr_at_20
value: 24.34915881726873
- type: mrr_at_3
value: 20.466883821932676
- type: mrr_at_5
value: 22.027144408251875
- type: nauc_map_at_1000_diff1
value: 21.34422077879759
- type: nauc_map_at_1000_max
value: 22.628208123980382
- type: nauc_map_at_1000_std
value: 15.80771024789922
- type: nauc_map_at_100_diff1
value: 21.373352148960333
- type: nauc_map_at_100_max
value: 22.445247482460697
- type: nauc_map_at_100_std
value: 15.551345921669244
- type: nauc_map_at_10_diff1
value: 22.093245216727393
- type: nauc_map_at_10_max
value: 20.71848879842843
- type: nauc_map_at_10_std
value: 13.073037988129768
- type: nauc_map_at_1_diff1
value: 32.56507685691908
- type: nauc_map_at_1_max
value: 19.299512363814912
- type: nauc_map_at_1_std
value: 7.980883065948159
- type: nauc_map_at_20_diff1
value: 21.612469499988222
- type: nauc_map_at_20_max
value: 21.70315933461587
- type: nauc_map_at_20_std
value: 14.51324386963804
- type: nauc_map_at_3_diff1
value: 22.671417020380986
- type: nauc_map_at_3_max
value: 18.10374651349345
- type: nauc_map_at_3_std
value: 9.73448791948781
- type: nauc_map_at_5_diff1
value: 22.034988196838064
- type: nauc_map_at_5_max
value: 18.490696961140145
- type: nauc_map_at_5_std
value: 11.001958112977931
- type: nauc_mrr_at_1000_diff1
value: 17.997877765827052
- type: nauc_mrr_at_1000_max
value: 23.761191320854795
- type: nauc_mrr_at_1000_std
value: 17.086288520129283
- type: nauc_mrr_at_100_diff1
value: 17.99589491236679
- type: nauc_mrr_at_100_max
value: 23.76386777696214
- type: nauc_mrr_at_100_std
value: 17.114923252433908
- type: nauc_mrr_at_10_diff1
value: 17.95028052166577
- type: nauc_mrr_at_10_max
value: 23.313446785613046
- type: nauc_mrr_at_10_std
value: 16.289313792057893
- type: nauc_mrr_at_1_diff1
value: 25.00794012521374
- type: nauc_mrr_at_1_max
value: 20.934023514536086
- type: nauc_mrr_at_1_std
value: 10.326842252115775
- type: nauc_mrr_at_20_diff1
value: 17.977173189525192
- type: nauc_mrr_at_20_max
value: 23.858084437038833
- type: nauc_mrr_at_20_std
value: 17.177629596269224
- type: nauc_mrr_at_3_diff1
value: 18.049118818264052
- type: nauc_mrr_at_3_max
value: 21.812245650122605
- type: nauc_mrr_at_3_std
value: 14.048078149579718
- type: nauc_mrr_at_5_diff1
value: 18.028877069283745
- type: nauc_mrr_at_5_max
value: 21.88620019054395
- type: nauc_mrr_at_5_std
value: 14.787661645971001
- type: nauc_ndcg_at_1000_diff1
value: 16.72726980659064
- type: nauc_ndcg_at_1000_max
value: 30.043672363788087
- type: nauc_ndcg_at_1000_std
value: 26.833584730455268
- type: nauc_ndcg_at_100_diff1
value: 17.16473243031922
- type: nauc_ndcg_at_100_max
value: 28.239622016125566
- type: nauc_ndcg_at_100_std
value: 24.469002695895977
- type: nauc_ndcg_at_10_diff1
value: 18.655890597433427
- type: nauc_ndcg_at_10_max
value: 23.63136724071696
- type: nauc_ndcg_at_10_std
value: 17.29295589103389
- type: nauc_ndcg_at_1_diff1
value: 25.00794012521374
- type: nauc_ndcg_at_1_max
value: 20.934023514536086
- type: nauc_ndcg_at_1_std
value: 10.326842252115775
- type: nauc_ndcg_at_20_diff1
value: 17.762757204969244
- type: nauc_ndcg_at_20_max
value: 25.946755000541476
- type: nauc_ndcg_at_20_std
value: 20.9523075152757
- type: nauc_ndcg_at_3_diff1
value: 18.258615831392746
- type: nauc_ndcg_at_3_max
value: 20.21498568651181
- type: nauc_ndcg_at_3_std
value: 12.588112301185989
- type: nauc_ndcg_at_5_diff1
value: 18.575198873459577
- type: nauc_ndcg_at_5_max
value: 19.821485190942443
- type: nauc_ndcg_at_5_std
value: 13.559611377687455
- type: nauc_precision_at_1000_diff1
value: -1.3591333339360123
- type: nauc_precision_at_1000_max
value: 33.01866225202323
- type: nauc_precision_at_1000_std
value: 38.26072433720804
- type: nauc_precision_at_100_diff1
value: 4.534183759090849
- type: nauc_precision_at_100_max
value: 35.499433595656335
- type: nauc_precision_at_100_std
value: 37.765227934597114
- type: nauc_precision_at_10_diff1
value: 11.369511250136568
- type: nauc_precision_at_10_max
value: 30.281092515358527
- type: nauc_precision_at_10_std
value: 26.690470077530847
- type: nauc_precision_at_1_diff1
value: 25.00794012521374
- type: nauc_precision_at_1_max
value: 20.934023514536086
- type: nauc_precision_at_1_std
value: 10.326842252115775
- type: nauc_precision_at_20_diff1
value: 8.133211694372351
- type: nauc_precision_at_20_max
value: 34.161055315782775
- type: nauc_precision_at_20_std
value: 33.33055010570849
- type: nauc_precision_at_3_diff1
value: 10.5682193001728
- type: nauc_precision_at_3_max
value: 22.786982248944767
- type: nauc_precision_at_3_std
value: 17.92766896610086
- type: nauc_precision_at_5_diff1
value: 10.940535871177055
- type: nauc_precision_at_5_max
value: 23.197073410356037
- type: nauc_precision_at_5_std
value: 20.612896217277573
- type: nauc_recall_at_1000_diff1
value: 5.540983045337761
- type: nauc_recall_at_1000_max
value: 37.3394645787145
- type: nauc_recall_at_1000_std
value: 43.905340993951555
- type: nauc_recall_at_100_diff1
value: 8.725053205627061
- type: nauc_recall_at_100_max
value: 29.46589116376182
- type: nauc_recall_at_100_std
value: 32.76739728784572
- type: nauc_recall_at_10_diff1
value: 13.519133005869758
- type: nauc_recall_at_10_max
value: 23.66746585259265
- type: nauc_recall_at_10_std
value: 19.744857128981092
- type: nauc_recall_at_1_diff1
value: 32.56507685691908
- type: nauc_recall_at_1_max
value: 19.299512363814912
- type: nauc_recall_at_1_std
value: 7.980883065948159
- type: nauc_recall_at_20_diff1
value: 10.866077600352101
- type: nauc_recall_at_20_max
value: 26.726876720649262
- type: nauc_recall_at_20_std
value: 26.28100368153264
- type: nauc_recall_at_3_diff1
value: 15.295338383488533
- type: nauc_recall_at_3_max
value: 18.013167170259173
- type: nauc_recall_at_3_std
value: 11.569701886642754
- type: nauc_recall_at_5_diff1
value: 14.214598780846863
- type: nauc_recall_at_5_max
value: 17.96550333772466
- type: nauc_recall_at_5_std
value: 13.720834673116972
- type: ndcg_at_1
value: 15.244
- type: ndcg_at_10
value: 17.288999999999998
- type: ndcg_at_100
value: 23.757
- type: ndcg_at_1000
value: 27.725
- type: ndcg_at_20
value: 19.686999999999998
- type: ndcg_at_3
value: 13.245000000000001
- type: ndcg_at_5
value: 14.485000000000001
- type: precision_at_1
value: 15.244
- type: precision_at_10
value: 5.733
- type: precision_at_100
value: 1.264
- type: precision_at_1000
value: 0.199
- type: precision_at_20
value: 3.85
- type: precision_at_3
value: 10.054
- type: precision_at_5
value: 7.9350000000000005
- type: recall_at_1
value: 6.537
- type: recall_at_10
value: 22.046
- type: recall_at_100
value: 44.818000000000005
- type: recall_at_1000
value: 67.676
- type: recall_at_20
value: 28.974
- type: recall_at_3
value: 12.232
- type: recall_at_5
value: 15.540999999999999
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 24.235
- type: map_at_1
value: 4.304
- type: map_at_10
value: 9.944
- type: map_at_100
value: 14.113000000000001
- type: map_at_1000
value: 15.085
- type: map_at_20
value: 11.594
- type: map_at_3
value: 7.228999999999999
- type: map_at_5
value: 8.368
- type: mrr_at_1
value: 43.0
- type: mrr_at_10
value: 53.30376984126983
- type: mrr_at_100
value: 53.97910163622114
- type: mrr_at_1000
value: 54.005267473599304
- type: mrr_at_20
value: 53.740161512249365
- type: mrr_at_3
value: 50.54166666666667
- type: mrr_at_5
value: 52.154166666666654
- type: nauc_map_at_1000_diff1
value: 26.809585057496545
- type: nauc_map_at_1000_max
value: 27.599866660752987
- type: nauc_map_at_1000_std
value: 31.459439584000094
- type: nauc_map_at_100_diff1
value: 27.049487336011836
- type: nauc_map_at_100_max
value: 25.112936840752
- type: nauc_map_at_100_std
value: 28.400137100413364
- type: nauc_map_at_10_diff1
value: 32.105246040146554
- type: nauc_map_at_10_max
value: 9.658311385867774
- type: nauc_map_at_10_std
value: 12.006591313970928
- type: nauc_map_at_1_diff1
value: 45.66826032911575
- type: nauc_map_at_1_max
value: 1.1005171486965344
- type: nauc_map_at_1_std
value: 3.2500050585955558
- type: nauc_map_at_20_diff1
value: 30.73734552740125
- type: nauc_map_at_20_max
value: 14.994971393610829
- type: nauc_map_at_20_std
value: 18.029603402042753
- type: nauc_map_at_3_diff1
value: 36.77585294977933
- type: nauc_map_at_3_max
value: 2.0123666749907034
- type: nauc_map_at_3_std
value: 3.1886056493854906
- type: nauc_map_at_5_diff1
value: 34.910885252980414
- type: nauc_map_at_5_max
value: 4.606898880177816
- type: nauc_map_at_5_std
value: 5.897472990222533
- type: nauc_mrr_at_1000_diff1
value: 32.8408203164654
- type: nauc_mrr_at_1000_max
value: 44.57916824429895
- type: nauc_mrr_at_1000_std
value: 25.76632603800019
- type: nauc_mrr_at_100_diff1
value: 32.83381181877902
- type: nauc_mrr_at_100_max
value: 44.57742098993615
- type: nauc_mrr_at_100_std
value: 25.763980866882193
- type: nauc_mrr_at_10_diff1
value: 32.85879447148161
- type: nauc_mrr_at_10_max
value: 44.587973042043814
- type: nauc_mrr_at_10_std
value: 25.548766798683893
- type: nauc_mrr_at_1_diff1
value: 36.064038704139605
- type: nauc_mrr_at_1_max
value: 43.188409566789346
- type: nauc_mrr_at_1_std
value: 24.26421817898062
- type: nauc_mrr_at_20_diff1
value: 32.752896264184685
- type: nauc_mrr_at_20_max
value: 44.56787283356919
- type: nauc_mrr_at_20_std
value: 25.763763879915313
- type: nauc_mrr_at_3_diff1
value: 33.265925003418126
- type: nauc_mrr_at_3_max
value: 43.98236209085194
- type: nauc_mrr_at_3_std
value: 24.811433062956347
- type: nauc_mrr_at_5_diff1
value: 33.02692454410134
- type: nauc_mrr_at_5_max
value: 44.02150946107612
- type: nauc_mrr_at_5_std
value: 24.414392179240878
- type: nauc_ndcg_at_1000_diff1
value: 29.071114816059023
- type: nauc_ndcg_at_1000_max
value: 38.90222092060964
- type: nauc_ndcg_at_1000_std
value: 44.44820451621514
- type: nauc_ndcg_at_100_diff1
value: 29.1316364198098
- type: nauc_ndcg_at_100_max
value: 31.558894971415064
- type: nauc_ndcg_at_100_std
value: 35.45395514581182
- type: nauc_ndcg_at_10_diff1
value: 29.303783217647744
- type: nauc_ndcg_at_10_max
value: 31.009718153863414
- type: nauc_ndcg_at_10_std
value: 27.49477754545124
- type: nauc_ndcg_at_1_diff1
value: 35.43480922848642
- type: nauc_ndcg_at_1_max
value: 30.475722281046714
- type: nauc_ndcg_at_1_std
value: 17.626646786380547
- type: nauc_ndcg_at_20_diff1
value: 29.30769894815147
- type: nauc_ndcg_at_20_max
value: 27.870710525324107
- type: nauc_ndcg_at_20_std
value: 28.334513734492532
- type: nauc_ndcg_at_3_diff1
value: 30.7536730308035
- type: nauc_ndcg_at_3_max
value: 32.32457811814772
- type: nauc_ndcg_at_3_std
value: 21.676427426548152
- type: nauc_ndcg_at_5_diff1
value: 29.96943892323901
- type: nauc_ndcg_at_5_max
value: 31.493512707920964
- type: nauc_ndcg_at_5_std
value: 24.0956693770445
- type: nauc_precision_at_1000_diff1
value: -5.720318672455256
- type: nauc_precision_at_1000_max
value: 28.08646209634404
- type: nauc_precision_at_1000_std
value: 29.34422238786186
- type: nauc_precision_at_100_diff1
value: 0.84607162708279
- type: nauc_precision_at_100_max
value: 47.97391409332498
- type: nauc_precision_at_100_std
value: 44.619521382937286
- type: nauc_precision_at_10_diff1
value: 9.622029967680373
- type: nauc_precision_at_10_max
value: 45.89203900455004
- type: nauc_precision_at_10_std
value: 38.276273021326745
- type: nauc_precision_at_1_diff1
value: 36.064038704139605
- type: nauc_precision_at_1_max
value: 43.188409566789346
- type: nauc_precision_at_1_std
value: 24.26421817898062
- type: nauc_precision_at_20_diff1
value: 6.709711811715244
- type: nauc_precision_at_20_max
value: 47.47318907005896
- type: nauc_precision_at_20_std
value: 42.595576770275095
- type: nauc_precision_at_3_diff1
value: 19.233575308317054
- type: nauc_precision_at_3_max
value: 43.02563765159987
- type: nauc_precision_at_3_std
value: 27.334254446564454
- type: nauc_precision_at_5_diff1
value: 14.298477498830673
- type: nauc_precision_at_5_max
value: 42.72631241492758
- type: nauc_precision_at_5_std
value: 32.14763584000337
- type: nauc_recall_at_1000_diff1
value: 18.551929022070503
- type: nauc_recall_at_1000_max
value: 25.99572596347025
- type: nauc_recall_at_1000_std
value: 49.479321187111644
- type: nauc_recall_at_100_diff1
value: 16.24655246342188
- type: nauc_recall_at_100_max
value: 19.193014693852824
- type: nauc_recall_at_100_std
value: 31.691642773148754
- type: nauc_recall_at_10_diff1
value: 21.181166055890365
- type: nauc_recall_at_10_max
value: -0.020533885799737757
- type: nauc_recall_at_10_std
value: 7.266191592314226
- type: nauc_recall_at_1_diff1
value: 45.66826032911575
- type: nauc_recall_at_1_max
value: 1.1005171486965344
- type: nauc_recall_at_1_std
value: 3.2500050585955558
- type: nauc_recall_at_20_diff1
value: 19.153797037751836
- type: nauc_recall_at_20_max
value: 3.9385573002743057
- type: nauc_recall_at_20_std
value: 14.048512138776442
- type: nauc_recall_at_3_diff1
value: 30.240078354763085
- type: nauc_recall_at_3_max
value: -4.0841121814480195
- type: nauc_recall_at_3_std
value: -2.3759344889809264
- type: nauc_recall_at_5_diff1
value: 26.22489817092464
- type: nauc_recall_at_5_max
value: -3.2396073154699256
- type: nauc_recall_at_5_std
value: -0.1327990827712389
- type: ndcg_at_1
value: 31.5
- type: ndcg_at_10
value: 24.235
- type: ndcg_at_100
value: 28.01
- type: ndcg_at_1000
value: 34.724
- type: ndcg_at_20
value: 24.265
- type: ndcg_at_3
value: 26.682
- type: ndcg_at_5
value: 25.249
- type: precision_at_1
value: 43.0
- type: precision_at_10
value: 21.65
- type: precision_at_100
value: 6.97
- type: precision_at_1000
value: 1.4449999999999998
- type: precision_at_20
value: 16.6
- type: precision_at_3
value: 32.25
- type: precision_at_5
value: 27.250000000000004
- type: recall_at_1
value: 4.304
- type: recall_at_10
value: 15.014
- type: recall_at_100
value: 35.115
- type: recall_at_1000
value: 58.52
- type: recall_at_20
value: 20.817
- type: recall_at_3
value: 8.698
- type: recall_at_5
value: 11.052
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.09
- type: f1
value: 41.3731018097549
- type: f1_weighted
value: 47.129694558751545
- type: main_score
value: 45.09
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 30.267
- type: map_at_1
value: 16.349
- type: map_at_10
value: 24.917
- type: map_at_100
value: 26.003
- type: map_at_1000
value: 26.072
- type: map_at_20
value: 25.558999999999997
- type: map_at_3
value: 22.067999999999998
- type: map_at_5
value: 23.610999999999997
- type: mrr_at_1
value: 17.416741674167415
- type: mrr_at_10
value: 26.439929707256365
- type: mrr_at_100
value: 27.508820939687954
- type: mrr_at_1000
value: 27.570352489203128
- type: mrr_at_20
value: 27.08319436248233
- type: mrr_at_3
value: 23.422342234223358
- type: mrr_at_5
value: 25.06350635063509
- type: nauc_map_at_1000_diff1
value: 21.773223671090857
- type: nauc_map_at_1000_max
value: 6.412897130218669
- type: nauc_map_at_1000_std
value: -6.3221009008493745
- type: nauc_map_at_100_diff1
value: 21.76483868507978
- type: nauc_map_at_100_max
value: 6.404365200549758
- type: nauc_map_at_100_std
value: -6.342840969370927
- type: nauc_map_at_10_diff1
value: 21.669481996014238
- type: nauc_map_at_10_max
value: 6.019531738681224
- type: nauc_map_at_10_std
value: -6.941777440293395
- type: nauc_map_at_1_diff1
value: 27.706382248361393
- type: nauc_map_at_1_max
value: 4.030610814398596
- type: nauc_map_at_1_std
value: -9.782554832619702
- type: nauc_map_at_20_diff1
value: 21.80535156700929
- type: nauc_map_at_20_max
value: 6.361714278006344
- type: nauc_map_at_20_std
value: -6.513790702798104
- type: nauc_map_at_3_diff1
value: 23.017059605983857
- type: nauc_map_at_3_max
value: 5.110304244032051
- type: nauc_map_at_3_std
value: -8.069547854658104
- type: nauc_map_at_5_diff1
value: 21.927491204194766
- type: nauc_map_at_5_max
value: 5.462525780765053
- type: nauc_map_at_5_std
value: -7.474340804858998
- type: nauc_mrr_at_1000_diff1
value: 21.61235920652557
- type: nauc_mrr_at_1000_max
value: 6.6996553488043915
- type: nauc_mrr_at_1000_std
value: -6.520954496784069
- type: nauc_mrr_at_100_diff1
value: 21.597831485534126
- type: nauc_mrr_at_100_max
value: 6.705135295195408
- type: nauc_mrr_at_100_std
value: -6.521597409657566
- type: nauc_mrr_at_10_diff1
value: 21.404259600861597
- type: nauc_mrr_at_10_max
value: 6.348078634441438
- type: nauc_mrr_at_10_std
value: -7.012906818443071
- type: nauc_mrr_at_1_diff1
value: 27.231264207663248
- type: nauc_mrr_at_1_max
value: 4.04888129901842
- type: nauc_mrr_at_1_std
value: -9.998368133129015
- type: nauc_mrr_at_20_diff1
value: 21.57543681953314
- type: nauc_mrr_at_20_max
value: 6.670007051575425
- type: nauc_mrr_at_20_std
value: -6.636382948186316
- type: nauc_mrr_at_3_diff1
value: 22.771758514181627
- type: nauc_mrr_at_3_max
value: 5.389600538667887
- type: nauc_mrr_at_3_std
value: -8.189661361743667
- type: nauc_mrr_at_5_diff1
value: 21.689397986510446
- type: nauc_mrr_at_5_max
value: 5.765658649049543
- type: nauc_mrr_at_5_std
value: -7.590205788150704
- type: nauc_ndcg_at_1000_diff1
value: 19.780729881850963
- type: nauc_ndcg_at_1000_max
value: 8.968522119658385
- type: nauc_ndcg_at_1000_std
value: -2.425269449284083
- type: nauc_ndcg_at_100_diff1
value: 19.46657224380776
- type: nauc_ndcg_at_100_max
value: 9.05883201318058
- type: nauc_ndcg_at_100_std
value: -2.5565659351523293
- type: nauc_ndcg_at_10_diff1
value: 19.29152253186839
- type: nauc_ndcg_at_10_max
value: 7.499062048205841
- type: nauc_ndcg_at_10_std
value: -5.2482566392088685
- type: nauc_ndcg_at_1_diff1
value: 27.231264207663248
- type: nauc_ndcg_at_1_max
value: 4.04888129901842
- type: nauc_ndcg_at_1_std
value: -9.998368133129015
- type: nauc_ndcg_at_20_diff1
value: 19.71545443537324
- type: nauc_ndcg_at_20_max
value: 8.64504551388718
- type: nauc_ndcg_at_20_std
value: -3.7667113417348976
- type: nauc_ndcg_at_3_diff1
value: 21.745216173844803
- type: nauc_ndcg_at_3_max
value: 5.650727598972489
- type: nauc_ndcg_at_3_std
value: -7.481336986244201
- type: nauc_ndcg_at_5_diff1
value: 19.936133837204203
- type: nauc_ndcg_at_5_max
value: 6.259916537058443
- type: nauc_ndcg_at_5_std
value: -6.484388158971839
- type: nauc_precision_at_1000_diff1
value: 1.471146535072958
- type: nauc_precision_at_1000_max
value: 20.630906784097483
- type: nauc_precision_at_1000_std
value: 21.9773366010731
- type: nauc_precision_at_100_diff1
value: 7.533964401054148
- type: nauc_precision_at_100_max
value: 19.925643661900423
- type: nauc_precision_at_100_std
value: 15.336729247195924
- type: nauc_precision_at_10_diff1
value: 12.150440335935734
- type: nauc_precision_at_10_max
value: 11.983854268540387
- type: nauc_precision_at_10_std
value: -0.37221151434129196
- type: nauc_precision_at_1_diff1
value: 27.231264207663248
- type: nauc_precision_at_1_max
value: 4.04888129901842
- type: nauc_precision_at_1_std
value: -9.998368133129015
- type: nauc_precision_at_20_diff1
value: 12.630450311503752
- type: nauc_precision_at_20_max
value: 16.05605149278296
- type: nauc_precision_at_20_std
value: 5.3999355877921165
- type: nauc_precision_at_3_diff1
value: 18.359563527526568
- type: nauc_precision_at_3_max
value: 7.050702808245418
- type: nauc_precision_at_3_std
value: -6.012052050420314
- type: nauc_precision_at_5_diff1
value: 14.398743831406193
- type: nauc_precision_at_5_max
value: 8.47645601614165
- type: nauc_precision_at_5_std
value: -4.017240645221931
- type: nauc_recall_at_1000_diff1
value: 7.839541590866944
- type: nauc_recall_at_1000_max
value: 23.309619602703478
- type: nauc_recall_at_1000_std
value: 27.804864458508405
- type: nauc_recall_at_100_diff1
value: 9.97691215791031
- type: nauc_recall_at_100_max
value: 18.819153599870717
- type: nauc_recall_at_100_std
value: 14.458117071228108
- type: nauc_recall_at_10_diff1
value: 12.810432997078946
- type: nauc_recall_at_10_max
value: 10.766544057766287
- type: nauc_recall_at_10_std
value: -0.5969028921503585
- type: nauc_recall_at_1_diff1
value: 27.706382248361393
- type: nauc_recall_at_1_max
value: 4.030610814398596
- type: nauc_recall_at_1_std
value: -9.782554832619702
- type: nauc_recall_at_20_diff1
value: 13.595110328407126
- type: nauc_recall_at_20_max
value: 14.757809231376443
- type: nauc_recall_at_20_std
value: 4.9020894617594575
- type: nauc_recall_at_3_diff1
value: 18.603105066886183
- type: nauc_recall_at_3_max
value: 6.695351132956627
- type: nauc_recall_at_3_std
value: -5.761401766506087
- type: nauc_recall_at_5_diff1
value: 14.770731919705574
- type: nauc_recall_at_5_max
value: 7.754748009508286
- type: nauc_recall_at_5_std
value: -3.7961358195332773
- type: ndcg_at_1
value: 17.416999999999998
- type: ndcg_at_10
value: 30.267
- type: ndcg_at_100
value: 35.650999999999996
- type: ndcg_at_1000
value: 37.57
- type: ndcg_at_20
value: 32.574
- type: ndcg_at_3
value: 24.303
- type: ndcg_at_5
value: 27.099
- type: precision_at_1
value: 17.416999999999998
- type: precision_at_10
value: 4.9590000000000005
- type: precision_at_100
value: 0.7799999999999999
- type: precision_at_1000
value: 0.096
- type: precision_at_20
value: 2.9819999999999998
- type: precision_at_3
value: 10.536
- type: precision_at_5
value: 7.807
- type: recall_at_1
value: 16.349
- type: recall_at_10
value: 45.678999999999995
- type: recall_at_100
value: 70.541
- type: recall_at_1000
value: 85.36500000000001
- type: recall_at_20
value: 54.541
- type: recall_at_3
value: 29.42
- type: recall_at_5
value: 36.112
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 16.619
- type: map_at_1
value: 7.478999999999999
- type: map_at_10
value: 11.933
- type: map_at_100
value: 13.078000000000001
- type: map_at_1000
value: 13.267999999999999
- type: map_at_20
value: 12.465
- type: map_at_3
value: 9.975000000000001
- type: map_at_5
value: 10.928
- type: mrr_at_1
value: 14.660493827160495
- type: mrr_at_10
value: 20.737250146972368
- type: mrr_at_100
value: 21.718558761167632
- type: mrr_at_1000
value: 21.808600465854973
- type: mrr_at_20
value: 21.221196101889976
- type: mrr_at_3
value: 18.569958847736622
- type: mrr_at_5
value: 19.557613168724284
- type: nauc_map_at_1000_diff1
value: 21.51431734644358
- type: nauc_map_at_1000_max
value: 4.931074809601008
- type: nauc_map_at_1000_std
value: -3.3303160557020033
- type: nauc_map_at_100_diff1
value: 21.38249575770264
- type: nauc_map_at_100_max
value: 4.725930298940441
- type: nauc_map_at_100_std
value: -3.4448477852279473
- type: nauc_map_at_10_diff1
value: 21.195172969735484
- type: nauc_map_at_10_max
value: 4.412691847045547
- type: nauc_map_at_10_std
value: -4.350074377307911
- type: nauc_map_at_1_diff1
value: 28.103238263092063
- type: nauc_map_at_1_max
value: 6.669837188399256
- type: nauc_map_at_1_std
value: -4.3658897905036405
- type: nauc_map_at_20_diff1
value: 21.489132375885042
- type: nauc_map_at_20_max
value: 4.303022314751493
- type: nauc_map_at_20_std
value: -4.17992541434375
- type: nauc_map_at_3_diff1
value: 22.237087711122065
- type: nauc_map_at_3_max
value: 4.533442194144081
- type: nauc_map_at_3_std
value: -5.4916480142821635
- type: nauc_map_at_5_diff1
value: 21.876772694300065
- type: nauc_map_at_5_max
value: 4.511112176374985
- type: nauc_map_at_5_std
value: -5.176150118472554
- type: nauc_mrr_at_1000_diff1
value: 22.783625924297894
- type: nauc_mrr_at_1000_max
value: 5.601679998803955
- type: nauc_mrr_at_1000_std
value: -7.3878080622090865
- type: nauc_mrr_at_100_diff1
value: 22.729460521696915
- type: nauc_mrr_at_100_max
value: 5.57805664833725
- type: nauc_mrr_at_100_std
value: -7.3741470356357945
- type: nauc_mrr_at_10_diff1
value: 22.92977199129734
- type: nauc_mrr_at_10_max
value: 5.36088601159652
- type: nauc_mrr_at_10_std
value: -7.875413563795927
- type: nauc_mrr_at_1_diff1
value: 28.31095482042955
- type: nauc_mrr_at_1_max
value: 7.815000197077026
- type: nauc_mrr_at_1_std
value: -7.957538731368522
- type: nauc_mrr_at_20_diff1
value: 22.946584920142406
- type: nauc_mrr_at_20_max
value: 5.384498887828733
- type: nauc_mrr_at_20_std
value: -7.633579657779428
- type: nauc_mrr_at_3_diff1
value: 23.46361356498147
- type: nauc_mrr_at_3_max
value: 4.50117125788086
- type: nauc_mrr_at_3_std
value: -8.902224452227653
- type: nauc_mrr_at_5_diff1
value: 23.331352654582094
- type: nauc_mrr_at_5_max
value: 4.978873752458006
- type: nauc_mrr_at_5_std
value: -8.93749978655238
- type: nauc_ndcg_at_1000_diff1
value: 19.87039469365751
- type: nauc_ndcg_at_1000_max
value: 8.696714614408632
- type: nauc_ndcg_at_1000_std
value: 1.9681923697039077
- type: nauc_ndcg_at_100_diff1
value: 18.868322837780532
- type: nauc_ndcg_at_100_max
value: 6.0333062132177675
- type: nauc_ndcg_at_100_std
value: 0.44045929715801535
- type: nauc_ndcg_at_10_diff1
value: 19.727068370792786
- type: nauc_ndcg_at_10_max
value: 4.277512828410901
- type: nauc_ndcg_at_10_std
value: -4.086859790177703
- type: nauc_ndcg_at_1_diff1
value: 28.31095482042955
- type: nauc_ndcg_at_1_max
value: 7.815000197077026
- type: nauc_ndcg_at_1_std
value: -7.957538731368522
- type: nauc_ndcg_at_20_diff1
value: 20.29147215834196
- type: nauc_ndcg_at_20_max
value: 4.095649235859702
- type: nauc_ndcg_at_20_std
value: -3.35870597862009
- type: nauc_ndcg_at_3_diff1
value: 21.821928240162936
- type: nauc_ndcg_at_3_max
value: 4.480256449572136
- type: nauc_ndcg_at_3_std
value: -7.852741840584263
- type: nauc_ndcg_at_5_diff1
value: 21.15156996884851
- type: nauc_ndcg_at_5_max
value: 4.290200639355712
- type: nauc_ndcg_at_5_std
value: -6.820305338379054
- type: nauc_precision_at_1000_diff1
value: 8.075302805866599
- type: nauc_precision_at_1000_max
value: 19.944406193476624
- type: nauc_precision_at_1000_std
value: 7.381890177301082
- type: nauc_precision_at_100_diff1
value: 11.601078456057651
- type: nauc_precision_at_100_max
value: 13.628171798745194
- type: nauc_precision_at_100_std
value: 5.64401780985023
- type: nauc_precision_at_10_diff1
value: 16.653551040271243
- type: nauc_precision_at_10_max
value: 6.546264597330201
- type: nauc_precision_at_10_std
value: -4.71713361654603
- type: nauc_precision_at_1_diff1
value: 28.31095482042955
- type: nauc_precision_at_1_max
value: 7.815000197077026
- type: nauc_precision_at_1_std
value: -7.957538731368522
- type: nauc_precision_at_20_diff1
value: 17.066402720849883
- type: nauc_precision_at_20_max
value: 6.178677607606832
- type: nauc_precision_at_20_std
value: -3.987829586084965
- type: nauc_precision_at_3_diff1
value: 18.358060169256518
- type: nauc_precision_at_3_max
value: 3.326657304001109
- type: nauc_precision_at_3_std
value: -10.729398884603352
- type: nauc_precision_at_5_diff1
value: 19.41722339541596
- type: nauc_precision_at_5_max
value: 5.714829813319856
- type: nauc_precision_at_5_std
value: -8.915414021584194
- type: nauc_recall_at_1000_diff1
value: 9.365082280755011
- type: nauc_recall_at_1000_max
value: 15.829818126823215
- type: nauc_recall_at_1000_std
value: 27.360808820832666
- type: nauc_recall_at_100_diff1
value: 8.05391879951721
- type: nauc_recall_at_100_max
value: 5.285477600522065
- type: nauc_recall_at_100_std
value: 13.239431098719457
- type: nauc_recall_at_10_diff1
value: 13.288596558862537
- type: nauc_recall_at_10_max
value: 1.9512189235666242
- type: nauc_recall_at_10_std
value: 0.08420098367582614
- type: nauc_recall_at_1_diff1
value: 28.103238263092063
- type: nauc_recall_at_1_max
value: 6.669837188399256
- type: nauc_recall_at_1_std
value: -4.3658897905036405
- type: nauc_recall_at_20_diff1
value: 14.781087409113736
- type: nauc_recall_at_20_max
value: 1.6715579437911525
- type: nauc_recall_at_20_std
value: 1.4885011649849296
- type: nauc_recall_at_3_diff1
value: 16.904223069103445
- type: nauc_recall_at_3_max
value: 1.2031021965601998
- type: nauc_recall_at_3_std
value: -5.7358517453558395
- type: nauc_recall_at_5_diff1
value: 15.560583779980208
- type: nauc_recall_at_5_max
value: 1.268944483676161
- type: nauc_recall_at_5_std
value: -5.114882384179444
- type: ndcg_at_1
value: 14.66
- type: ndcg_at_10
value: 16.619
- type: ndcg_at_100
value: 22.467000000000002
- type: ndcg_at_1000
value: 26.745
- type: ndcg_at_20
value: 18.356
- type: ndcg_at_3
value: 13.547
- type: ndcg_at_5
value: 14.466999999999999
- type: precision_at_1
value: 14.66
- type: precision_at_10
value: 4.8149999999999995
- type: precision_at_100
value: 1.0619999999999998
- type: precision_at_1000
value: 0.182
- type: precision_at_20
value: 3.071
- type: precision_at_3
value: 9.002
- type: precision_at_5
value: 6.79
- type: recall_at_1
value: 7.478999999999999
- type: recall_at_10
value: 21.884
- type: recall_at_100
value: 45.545
- type: recall_at_1000
value: 71.887
- type: recall_at_20
value: 27.567999999999998
- type: recall_at_3
value: 12.485
- type: recall_at_5
value: 15.862000000000002
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 36.217
- type: map_at_1
value: 20.628
- type: map_at_10
value: 28.559
- type: map_at_100
value: 29.5
- type: map_at_1000
value: 29.601
- type: map_at_20
value: 29.069
- type: map_at_3
value: 26.429000000000002
- type: map_at_5
value: 27.589000000000002
- type: mrr_at_1
value: 41.2559081701553
- type: mrr_at_10
value: 48.84337052399182
- type: mrr_at_100
value: 49.523346087979284
- type: mrr_at_1000
value: 49.56958885341236
- type: mrr_at_20
value: 49.24793448550151
- type: mrr_at_3
value: 46.893990546927924
- type: mrr_at_5
value: 48.02430790006756
- type: nauc_map_at_1000_diff1
value: 47.360168970984724
- type: nauc_map_at_1000_max
value: 24.614881662381816
- type: nauc_map_at_1000_std
value: 7.361001821254585
- type: nauc_map_at_100_diff1
value: 47.364333667549126
- type: nauc_map_at_100_max
value: 24.59919582686935
- type: nauc_map_at_100_std
value: 7.30629187742088
- type: nauc_map_at_10_diff1
value: 47.72981170600924
- type: nauc_map_at_10_max
value: 24.438913671717863
- type: nauc_map_at_10_std
value: 6.344771843030873
- type: nauc_map_at_1_diff1
value: 60.38112885477367
- type: nauc_map_at_1_max
value: 25.9097175050165
- type: nauc_map_at_1_std
value: 1.6564371988429167
- type: nauc_map_at_20_diff1
value: 47.57684884180127
- type: nauc_map_at_20_max
value: 24.499763513475443
- type: nauc_map_at_20_std
value: 6.846169751546589
- type: nauc_map_at_3_diff1
value: 49.86374782865936
- type: nauc_map_at_3_max
value: 24.885292020762233
- type: nauc_map_at_3_std
value: 4.8258321037343075
- type: nauc_map_at_5_diff1
value: 48.41433187485084
- type: nauc_map_at_5_max
value: 24.439622781310288
- type: nauc_map_at_5_std
value: 5.664110533938225
- type: nauc_mrr_at_1000_diff1
value: 56.730426912840926
- type: nauc_mrr_at_1000_max
value: 25.303184184778832
- type: nauc_mrr_at_1000_std
value: 4.096788282752593
- type: nauc_mrr_at_100_diff1
value: 56.72217642846328
- type: nauc_mrr_at_100_max
value: 25.302090289174313
- type: nauc_mrr_at_100_std
value: 4.108586907297719
- type: nauc_mrr_at_10_diff1
value: 56.738023427066885
- type: nauc_mrr_at_10_max
value: 25.271616491844455
- type: nauc_mrr_at_10_std
value: 3.824908381559653
- type: nauc_mrr_at_1_diff1
value: 60.38112885477367
- type: nauc_mrr_at_1_max
value: 25.9097175050165
- type: nauc_mrr_at_1_std
value: 1.6564371988429167
- type: nauc_mrr_at_20_diff1
value: 56.70644340159845
- type: nauc_mrr_at_20_max
value: 25.27993872890672
- type: nauc_mrr_at_20_std
value: 4.0064390570846875
- type: nauc_mrr_at_3_diff1
value: 57.245840183280194
- type: nauc_mrr_at_3_max
value: 25.33525251108163
- type: nauc_mrr_at_3_std
value: 2.9291934957523584
- type: nauc_mrr_at_5_diff1
value: 56.755596718387125
- type: nauc_mrr_at_5_max
value: 25.22311364368114
- type: nauc_mrr_at_5_std
value: 3.5613271952141865
- type: nauc_ndcg_at_1000_diff1
value: 46.553394894195456
- type: nauc_ndcg_at_1000_max
value: 24.938550469205936
- type: nauc_ndcg_at_1000_std
value: 11.539278224453703
- type: nauc_ndcg_at_100_diff1
value: 46.60518292153804
- type: nauc_ndcg_at_100_max
value: 24.724969691359487
- type: nauc_ndcg_at_100_std
value: 10.73834721703669
- type: nauc_ndcg_at_10_diff1
value: 48.12092181292035
- type: nauc_ndcg_at_10_max
value: 24.2791002435645
- type: nauc_ndcg_at_10_std
value: 7.153695707296072
- type: nauc_ndcg_at_1_diff1
value: 60.38112885477367
- type: nauc_ndcg_at_1_max
value: 25.9097175050165
- type: nauc_ndcg_at_1_std
value: 1.6564371988429167
- type: nauc_ndcg_at_20_diff1
value: 47.65117800859018
- type: nauc_ndcg_at_20_max
value: 24.357451369693482
- type: nauc_ndcg_at_20_std
value: 8.469581027730795
- type: nauc_ndcg_at_3_diff1
value: 51.08303103543016
- type: nauc_ndcg_at_3_max
value: 24.799424583706255
- type: nauc_ndcg_at_3_std
value: 4.63909501741516
- type: nauc_ndcg_at_5_diff1
value: 49.136821889915225
- type: nauc_ndcg_at_5_max
value: 24.243099266851612
- type: nauc_ndcg_at_5_std
value: 5.961841495442629
- type: nauc_precision_at_1000_diff1
value: 14.823992446535481
- type: nauc_precision_at_1000_max
value: 17.957974549199044
- type: nauc_precision_at_1000_std
value: 31.79928156519854
- type: nauc_precision_at_100_diff1
value: 23.121894912525356
- type: nauc_precision_at_100_max
value: 19.166436915427486
- type: nauc_precision_at_100_std
value: 23.79964191034748
- type: nauc_precision_at_10_diff1
value: 35.6440151764581
- type: nauc_precision_at_10_max
value: 21.022400502868223
- type: nauc_precision_at_10_std
value: 11.461152130387351
- type: nauc_precision_at_1_diff1
value: 60.38112885477367
- type: nauc_precision_at_1_max
value: 25.9097175050165
- type: nauc_precision_at_1_std
value: 1.6564371988429167
- type: nauc_precision_at_20_diff1
value: 31.893138428309527
- type: nauc_precision_at_20_max
value: 19.961827091439737
- type: nauc_precision_at_20_std
value: 15.056260461619232
- type: nauc_precision_at_3_diff1
value: 45.06971180999361
- type: nauc_precision_at_3_max
value: 23.635891515921788
- type: nauc_precision_at_3_std
value: 6.198234444102806
- type: nauc_precision_at_5_diff1
value: 39.43842818627394
- type: nauc_precision_at_5_max
value: 21.623592109687603
- type: nauc_precision_at_5_std
value: 8.718348302717638
- type: nauc_recall_at_1000_diff1
value: 14.823992446535502
- type: nauc_recall_at_1000_max
value: 17.95797454919907
- type: nauc_recall_at_1000_std
value: 31.799281565198577
- type: nauc_recall_at_100_diff1
value: 23.121894912525338
- type: nauc_recall_at_100_max
value: 19.16643691542745
- type: nauc_recall_at_100_std
value: 23.799641910347454
- type: nauc_recall_at_10_diff1
value: 35.64401517645808
- type: nauc_recall_at_10_max
value: 21.022400502868223
- type: nauc_recall_at_10_std
value: 11.461152130387346
- type: nauc_recall_at_1_diff1
value: 60.38112885477367
- type: nauc_recall_at_1_max
value: 25.9097175050165
- type: nauc_recall_at_1_std
value: 1.6564371988429167
- type: nauc_recall_at_20_diff1
value: 31.89313842830953
- type: nauc_recall_at_20_max
value: 19.961827091439776
- type: nauc_recall_at_20_std
value: 15.05626046161922
- type: nauc_recall_at_3_diff1
value: 45.06971180999365
- type: nauc_recall_at_3_max
value: 23.6358915159218
- type: nauc_recall_at_3_std
value: 6.198234444102802
- type: nauc_recall_at_5_diff1
value: 39.43842818627392
- type: nauc_recall_at_5_max
value: 21.623592109687596
- type: nauc_recall_at_5_std
value: 8.71834830271761
- type: ndcg_at_1
value: 41.256
- type: ndcg_at_10
value: 36.217
- type: ndcg_at_100
value: 40.422000000000004
- type: ndcg_at_1000
value: 42.762
- type: ndcg_at_20
value: 37.801
- type: ndcg_at_3
value: 32.275999999999996
- type: ndcg_at_5
value: 34.184
- type: precision_at_1
value: 41.256
- type: precision_at_10
value: 7.838000000000001
- type: precision_at_100
value: 1.119
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_20
value: 4.429
- type: precision_at_3
value: 20.207
- type: precision_at_5
value: 13.636999999999999
- type: recall_at_1
value: 20.628
- type: recall_at_10
value: 39.190000000000005
- type: recall_at_100
value: 55.962
- type: recall_at_1000
value: 71.56700000000001
- type: recall_at_20
value: 44.288
- type: recall_at_3
value: 30.311
- type: recall_at_5
value: 34.092
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 70.78
- type: ap
value: 65.09281598781793
- type: ap_weighted
value: 65.09281598781793
- type: f1
value: 70.56498155979408
- type: f1_weighted
value: 70.56498155979408
- type: main_score
value: 70.78
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 34.981
- type: map_at_1
value: 0.9369999999999999
- type: map_at_10
value: 6.105
- type: map_at_100
value: 16.573
- type: map_at_1000
value: 20.952
- type: map_at_20
value: 9.495000000000001
- type: map_at_3
value: 2.429
- type: map_at_5
value: 3.7199999999999998
- type: mrr_at_1
value: 55.81395348837209
- type: mrr_at_10
value: 68.06201550387597
- type: mrr_at_100
value: 68.1915571731129
- type: mrr_at_1000
value: 68.20171255038517
- type: mrr_at_20
value: 68.06201550387597
- type: mrr_at_3
value: 65.89147286821705
- type: mrr_at_5
value: 67.05426356589147
- type: nauc_map_at_1000_diff1
value: 18.395978949265306
- type: nauc_map_at_1000_max
value: 65.4845955483722
- type: nauc_map_at_1000_std
value: 60.01425674651855
- type: nauc_map_at_100_diff1
value: 17.66459171040137
- type: nauc_map_at_100_max
value: 56.91214775388199
- type: nauc_map_at_100_std
value: 51.26999006986676
- type: nauc_map_at_10_diff1
value: 16.954292128521953
- type: nauc_map_at_10_max
value: 29.470502786246144
- type: nauc_map_at_10_std
value: 26.609751637393327
- type: nauc_map_at_1_diff1
value: 10.947697022780028
- type: nauc_map_at_1_max
value: 11.333211449460881
- type: nauc_map_at_1_std
value: 19.475048420924633
- type: nauc_map_at_20_diff1
value: 13.788525799384063
- type: nauc_map_at_20_max
value: 36.86668066777578
- type: nauc_map_at_20_std
value: 31.64971965701265
- type: nauc_map_at_3_diff1
value: 17.859630126844696
- type: nauc_map_at_3_max
value: 21.46834280704547
- type: nauc_map_at_3_std
value: 21.076387895251823
- type: nauc_map_at_5_diff1
value: 20.17441650295119
- type: nauc_map_at_5_max
value: 24.878188082696866
- type: nauc_map_at_5_std
value: 25.307502719861176
- type: nauc_mrr_at_1000_diff1
value: 14.192749126463891
- type: nauc_mrr_at_1000_max
value: 52.54526357757101
- type: nauc_mrr_at_1000_std
value: 44.496694053499596
- type: nauc_mrr_at_100_diff1
value: 14.215939043892334
- type: nauc_mrr_at_100_max
value: 52.564251294672225
- type: nauc_mrr_at_100_std
value: 44.51890218594217
- type: nauc_mrr_at_10_diff1
value: 14.433120969285195
- type: nauc_mrr_at_10_max
value: 52.78365722715205
- type: nauc_mrr_at_10_std
value: 44.72011559301776
- type: nauc_mrr_at_1_diff1
value: 4.7355957804700415
- type: nauc_mrr_at_1_max
value: 39.93352486009351
- type: nauc_mrr_at_1_std
value: 39.55801119967461
- type: nauc_mrr_at_20_diff1
value: 14.433120969285195
- type: nauc_mrr_at_20_max
value: 52.78365722715205
- type: nauc_mrr_at_20_std
value: 44.72011559301776
- type: nauc_mrr_at_3_diff1
value: 13.11183382637074
- type: nauc_mrr_at_3_max
value: 51.12370908328734
- type: nauc_mrr_at_3_std
value: 40.238401804460075
- type: nauc_mrr_at_5_diff1
value: 13.179254658692855
- type: nauc_mrr_at_5_max
value: 53.38265101836388
- type: nauc_mrr_at_5_std
value: 44.541370972177624
- type: nauc_ndcg_at_1000_diff1
value: 21.69587945916941
- type: nauc_ndcg_at_1000_max
value: 63.37066645313249
- type: nauc_ndcg_at_1000_std
value: 62.97303091219909
- type: nauc_ndcg_at_100_diff1
value: 14.796314010328851
- type: nauc_ndcg_at_100_max
value: 58.71101997436683
- type: nauc_ndcg_at_100_std
value: 56.81420228421644
- type: nauc_ndcg_at_10_diff1
value: 3.194403093296008
- type: nauc_ndcg_at_10_max
value: 48.55754387196878
- type: nauc_ndcg_at_10_std
value: 47.48615570741263
- type: nauc_ndcg_at_1_diff1
value: -6.148169734658873
- type: nauc_ndcg_at_1_max
value: 25.556355503841665
- type: nauc_ndcg_at_1_std
value: 21.48805389151005
- type: nauc_ndcg_at_20_diff1
value: 4.461683170351035
- type: nauc_ndcg_at_20_max
value: 56.88294190421313
- type: nauc_ndcg_at_20_std
value: 51.93821404537562
- type: nauc_ndcg_at_3_diff1
value: -2.861880240597804
- type: nauc_ndcg_at_3_max
value: 41.33450475096539
- type: nauc_ndcg_at_3_std
value: 37.27470370159716
- type: nauc_ndcg_at_5_diff1
value: 0.08149020695323854
- type: nauc_ndcg_at_5_max
value: 46.722954751612264
- type: nauc_ndcg_at_5_std
value: 44.665247293303416
- type: nauc_precision_at_1000_diff1
value: 6.514642381748156
- type: nauc_precision_at_1000_max
value: 54.61143553569596
- type: nauc_precision_at_1000_std
value: 51.84636945565138
- type: nauc_precision_at_100_diff1
value: 9.181266993927007
- type: nauc_precision_at_100_max
value: 63.29553111429812
- type: nauc_precision_at_100_std
value: 59.013060721871035
- type: nauc_precision_at_10_diff1
value: 16.062673027273505
- type: nauc_precision_at_10_max
value: 64.85826828536602
- type: nauc_precision_at_10_std
value: 58.476222375984
- type: nauc_precision_at_1_diff1
value: 4.7355957804700415
- type: nauc_precision_at_1_max
value: 39.93352486009351
- type: nauc_precision_at_1_std
value: 39.55801119967461
- type: nauc_precision_at_20_diff1
value: 12.061096674017728
- type: nauc_precision_at_20_max
value: 66.81322466200473
- type: nauc_precision_at_20_std
value: 58.18606533749746
- type: nauc_precision_at_3_diff1
value: 9.10289433878097
- type: nauc_precision_at_3_max
value: 61.00901833818042
- type: nauc_precision_at_3_std
value: 52.94626237786338
- type: nauc_precision_at_5_diff1
value: 13.765083369324818
- type: nauc_precision_at_5_max
value: 67.0735717931603
- type: nauc_precision_at_5_std
value: 60.160759158192334
- type: nauc_recall_at_1000_diff1
value: 33.378885488094184
- type: nauc_recall_at_1000_max
value: 58.97167459966026
- type: nauc_recall_at_1000_std
value: 59.59218645358476
- type: nauc_recall_at_100_diff1
value: 25.1307767949282
- type: nauc_recall_at_100_max
value: 48.29698220976826
- type: nauc_recall_at_100_std
value: 44.76527467601765
- type: nauc_recall_at_10_diff1
value: 21.012536607264714
- type: nauc_recall_at_10_max
value: 21.719714919287135
- type: nauc_recall_at_10_std
value: 18.503987452436643
- type: nauc_recall_at_1_diff1
value: 10.947697022780028
- type: nauc_recall_at_1_max
value: 11.333211449460881
- type: nauc_recall_at_1_std
value: 19.475048420924633
- type: nauc_recall_at_20_diff1
value: 14.221666924930961
- type: nauc_recall_at_20_max
value: 30.83326629354958
- type: nauc_recall_at_20_std
value: 25.419400751031635
- type: nauc_recall_at_3_diff1
value: 19.488515137385438
- type: nauc_recall_at_3_max
value: 18.682366339227507
- type: nauc_recall_at_3_std
value: 14.801487977327957
- type: nauc_recall_at_5_diff1
value: 21.493404372645262
- type: nauc_recall_at_5_max
value: 22.470910257369972
- type: nauc_recall_at_5_std
value: 20.91789333035049
- type: ndcg_at_1
value: 36.047000000000004
- type: ndcg_at_10
value: 34.981
- type: ndcg_at_100
value: 33.928000000000004
- type: ndcg_at_1000
value: 42.553999999999995
- type: ndcg_at_20
value: 33.768
- type: ndcg_at_3
value: 35.477
- type: ndcg_at_5
value: 35.54
- type: precision_at_1
value: 55.814
- type: precision_at_10
value: 46.744
- type: precision_at_100
value: 22.721
- type: precision_at_1000
value: 4.781
- type: precision_at_20
value: 40.465
- type: precision_at_3
value: 52.713
- type: precision_at_5
value: 51.163000000000004
- type: recall_at_1
value: 0.9369999999999999
- type: recall_at_10
value: 7.921
- type: recall_at_100
value: 28.903000000000002
- type: recall_at_1000
value: 53.691
- type: recall_at_20
value: 12.745000000000001
- type: recall_at_3
value: 2.8240000000000003
- type: recall_at_5
value: 4.476999999999999
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.95576835385319
- type: f1
value: 88.06364678376042
- type: f1_weighted
value: 89.00721562093213
- type: main_score
value: 88.95576835385319
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 56.99726402188783
- type: f1
value: 38.19916053247397
- type: f1_weighted
value: 59.96788951671549
- type: main_score
value: 56.99726402188783
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 63.79287155346336
- type: f1
value: 61.634629394462934
- type: f1_weighted
value: 62.567311481126055
- type: main_score
value: 63.79287155346336
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 70.30934767989241
- type: f1
value: 68.77914761769517
- type: f1_weighted
value: 70.1128179307388
- type: main_score
value: 70.30934767989241
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 27.61734940907637
- type: v_measure
value: 27.61734940907637
- type: v_measure_std
value: 1.2248100208316097
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 23.802943866708308
- type: v_measure
value: 23.802943866708308
- type: v_measure_std
value: 1.4975518910969763
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 29.431722284942175
- type: map
value: 29.431722284942175
- type: mrr
value: 30.207239990924332
- type: nAUC_map_diff1
value: 8.996546748314882
- type: nAUC_map_max
value: -23.177815249478726
- type: nAUC_map_std
value: -8.953694065964015
- type: nAUC_mrr_diff1
value: 9.247690774332192
- type: nAUC_mrr_max
value: -17.42779158552557
- type: nAUC_mrr_std
value: -5.997215692334967
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 24.267
- type: map_at_1
value: 3.479
- type: map_at_10
value: 7.603
- type: map_at_100
value: 9.725999999999999
- type: map_at_1000
value: 10.84
- type: map_at_20
value: 8.458
- type: map_at_3
value: 5.844
- type: map_at_5
value: 6.732
- type: mrr_at_1
value: 33.746130030959755
- type: mrr_at_10
value: 43.515897587105016
- type: mrr_at_100
value: 44.1900925310943
- type: mrr_at_1000
value: 44.248355412773655
- type: mrr_at_20
value: 43.868459509915866
- type: mrr_at_3
value: 41.74406604747161
- type: mrr_at_5
value: 42.82765737874097
- type: nauc_map_at_1000_diff1
value: 34.88971488841416
- type: nauc_map_at_1000_max
value: 31.233839968277195
- type: nauc_map_at_1000_std
value: 17.992857492799814
- type: nauc_map_at_100_diff1
value: 36.76693324709909
- type: nauc_map_at_100_max
value: 29.86086979425915
- type: nauc_map_at_100_std
value: 13.839419605590217
- type: nauc_map_at_10_diff1
value: 41.84259867098214
- type: nauc_map_at_10_max
value: 25.879197474145045
- type: nauc_map_at_10_std
value: 5.172621372587683
- type: nauc_map_at_1_diff1
value: 59.30631217950276
- type: nauc_map_at_1_max
value: 20.33548433428363
- type: nauc_map_at_1_std
value: -1.8217254079917093
- type: nauc_map_at_20_diff1
value: 38.95414455683049
- type: nauc_map_at_20_max
value: 26.987123257006363
- type: nauc_map_at_20_std
value: 8.70109669516395
- type: nauc_map_at_3_diff1
value: 47.18504542973307
- type: nauc_map_at_3_max
value: 21.706151469833202
- type: nauc_map_at_3_std
value: 0.8205050181794802
- type: nauc_map_at_5_diff1
value: 45.415931092144476
- type: nauc_map_at_5_max
value: 23.366427326413234
- type: nauc_map_at_5_std
value: 2.036343948136038
- type: nauc_mrr_at_1000_diff1
value: 34.09352814360173
- type: nauc_mrr_at_1000_max
value: 36.57744406738573
- type: nauc_mrr_at_1000_std
value: 18.874642200828255
- type: nauc_mrr_at_100_diff1
value: 34.07606233752646
- type: nauc_mrr_at_100_max
value: 36.570920987632604
- type: nauc_mrr_at_100_std
value: 18.90704866545748
- type: nauc_mrr_at_10_diff1
value: 33.86749261732675
- type: nauc_mrr_at_10_max
value: 36.53445713485045
- type: nauc_mrr_at_10_std
value: 18.72635222657426
- type: nauc_mrr_at_1_diff1
value: 38.310753456104415
- type: nauc_mrr_at_1_max
value: 32.080433604684444
- type: nauc_mrr_at_1_std
value: 10.76705379557832
- type: nauc_mrr_at_20_diff1
value: 34.05889362360272
- type: nauc_mrr_at_20_max
value: 36.539902847898894
- type: nauc_mrr_at_20_std
value: 18.829170969376136
- type: nauc_mrr_at_3_diff1
value: 34.661230693226
- type: nauc_mrr_at_3_max
value: 35.27494037957078
- type: nauc_mrr_at_3_std
value: 16.799715396839538
- type: nauc_mrr_at_5_diff1
value: 34.30568391918026
- type: nauc_mrr_at_5_max
value: 36.31513238612551
- type: nauc_mrr_at_5_std
value: 18.248879043938977
- type: nauc_ndcg_at_1000_diff1
value: 28.625594076978317
- type: nauc_ndcg_at_1000_max
value: 39.10317925519372
- type: nauc_ndcg_at_1000_std
value: 28.285055860454257
- type: nauc_ndcg_at_100_diff1
value: 27.620568325357986
- type: nauc_ndcg_at_100_max
value: 34.32867733567831
- type: nauc_ndcg_at_100_std
value: 25.103257804738867
- type: nauc_ndcg_at_10_diff1
value: 24.527566945282576
- type: nauc_ndcg_at_10_max
value: 32.19051221282665
- type: nauc_ndcg_at_10_std
value: 25.403501921327432
- type: nauc_ndcg_at_1_diff1
value: 38.95386802348185
- type: nauc_ndcg_at_1_max
value: 30.134605059752644
- type: nauc_ndcg_at_1_std
value: 11.904644683131
- type: nauc_ndcg_at_20_diff1
value: 25.422544698266798
- type: nauc_ndcg_at_20_max
value: 31.85394200124836
- type: nauc_ndcg_at_20_std
value: 26.925279769256523
- type: nauc_ndcg_at_3_diff1
value: 27.968874988258573
- type: nauc_ndcg_at_3_max
value: 30.93696431950224
- type: nauc_ndcg_at_3_std
value: 18.551823245893114
- type: nauc_ndcg_at_5_diff1
value: 25.722349682774233
- type: nauc_ndcg_at_5_max
value: 32.29294830500251
- type: nauc_ndcg_at_5_std
value: 21.309663190563718
- type: nauc_precision_at_1000_diff1
value: -7.466934392543785
- type: nauc_precision_at_1000_max
value: 17.534662065944236
- type: nauc_precision_at_1000_std
value: 43.86335465977071
- type: nauc_precision_at_100_diff1
value: -2.073530455550674
- type: nauc_precision_at_100_max
value: 26.51626141328235
- type: nauc_precision_at_100_std
value: 47.02741717034574
- type: nauc_precision_at_10_diff1
value: 6.717006995188633
- type: nauc_precision_at_10_max
value: 32.738691529253494
- type: nauc_precision_at_10_std
value: 35.80103442917034
- type: nauc_precision_at_1_diff1
value: 38.310753456104415
- type: nauc_precision_at_1_max
value: 32.080433604684444
- type: nauc_precision_at_1_std
value: 10.76705379557832
- type: nauc_precision_at_20_diff1
value: 2.745832502363386
- type: nauc_precision_at_20_max
value: 30.954145690157688
- type: nauc_precision_at_20_std
value: 41.74795596694651
- type: nauc_precision_at_3_diff1
value: 20.04271494210498
- type: nauc_precision_at_3_max
value: 32.49798591360355
- type: nauc_precision_at_3_std
value: 22.433174666547337
- type: nauc_precision_at_5_diff1
value: 13.559244763754297
- type: nauc_precision_at_5_max
value: 34.29174467545541
- type: nauc_precision_at_5_std
value: 27.67088510253159
- type: nauc_recall_at_1000_diff1
value: 14.406899781864585
- type: nauc_recall_at_1000_max
value: 18.63293041982341
- type: nauc_recall_at_1000_std
value: 14.873113563587054
- type: nauc_recall_at_100_diff1
value: 20.276630820341023
- type: nauc_recall_at_100_max
value: 20.74130868375551
- type: nauc_recall_at_100_std
value: 14.253807947296465
- type: nauc_recall_at_10_diff1
value: 32.131322772361194
- type: nauc_recall_at_10_max
value: 21.834619003317645
- type: nauc_recall_at_10_std
value: 5.111047982154726
- type: nauc_recall_at_1_diff1
value: 59.30631217950276
- type: nauc_recall_at_1_max
value: 20.33548433428363
- type: nauc_recall_at_1_std
value: -1.8217254079917093
- type: nauc_recall_at_20_diff1
value: 29.009526186873646
- type: nauc_recall_at_20_max
value: 19.222693262075214
- type: nauc_recall_at_20_std
value: 8.263428180065297
- type: nauc_recall_at_3_diff1
value: 38.428506196942266
- type: nauc_recall_at_3_max
value: 18.92885903756039
- type: nauc_recall_at_3_std
value: 2.2767688747391106
- type: nauc_recall_at_5_diff1
value: 35.93597428489607
- type: nauc_recall_at_5_max
value: 19.591607144107787
- type: nauc_recall_at_5_std
value: 2.110828447844176
- type: ndcg_at_1
value: 31.424000000000003
- type: ndcg_at_10
value: 24.267
- type: ndcg_at_100
value: 22.416
- type: ndcg_at_1000
value: 31.165
- type: ndcg_at_20
value: 22.698
- type: ndcg_at_3
value: 28.349999999999998
- type: ndcg_at_5
value: 26.596999999999998
- type: precision_at_1
value: 33.745999999999995
- type: precision_at_10
value: 18.173000000000002
- type: precision_at_100
value: 6.142
- type: precision_at_1000
value: 1.856
- type: precision_at_20
value: 13.808000000000002
- type: precision_at_3
value: 27.141
- type: precision_at_5
value: 22.91
- type: recall_at_1
value: 3.479
- type: recall_at_10
value: 10.838000000000001
- type: recall_at_100
value: 23.817
- type: recall_at_1000
value: 54.910000000000004
- type: recall_at_20
value: 14.201
- type: recall_at_3
value: 7.236
- type: recall_at_5
value: 9.003
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 19.543
- type: map_at_1
value: 8.413
- type: map_at_10
value: 15.137
- type: map_at_100
value: 16.393
- type: map_at_1000
value: 16.492
- type: map_at_20
value: 15.827
- type: map_at_3
value: 12.584999999999999
- type: map_at_5
value: 13.963000000000001
- type: mrr_at_1
value: 9.73348783314021
- type: mrr_at_10
value: 16.79895712630359
- type: mrr_at_100
value: 17.96527488497497
- type: mrr_at_1000
value: 18.049284621380956
- type: mrr_at_20
value: 17.456541969883244
- type: mrr_at_3
value: 14.2429509463113
- type: mrr_at_5
value: 15.636346079567373
- type: nauc_map_at_1000_diff1
value: 18.819971639310904
- type: nauc_map_at_1000_max
value: 13.814947350680912
- type: nauc_map_at_1000_std
value: 2.521914759184715
- type: nauc_map_at_100_diff1
value: 18.814255883152295
- type: nauc_map_at_100_max
value: 13.784098474987728
- type: nauc_map_at_100_std
value: 2.463386644603925
- type: nauc_map_at_10_diff1
value: 18.859741700546
- type: nauc_map_at_10_max
value: 13.200112454161522
- type: nauc_map_at_10_std
value: 1.2838729142015952
- type: nauc_map_at_1_diff1
value: 22.792911666175435
- type: nauc_map_at_1_max
value: 9.420966909430586
- type: nauc_map_at_1_std
value: -2.177707391834426
- type: nauc_map_at_20_diff1
value: 18.857585870077603
- type: nauc_map_at_20_max
value: 13.494371000020585
- type: nauc_map_at_20_std
value: 1.7987081767888724
- type: nauc_map_at_3_diff1
value: 20.3919043114244
- type: nauc_map_at_3_max
value: 11.229233328712159
- type: nauc_map_at_3_std
value: -0.38627708043707826
- type: nauc_map_at_5_diff1
value: 19.354241266183816
- type: nauc_map_at_5_max
value: 12.050995012138287
- type: nauc_map_at_5_std
value: 0.4619900683963445
- type: nauc_mrr_at_1000_diff1
value: 17.44597143162577
- type: nauc_mrr_at_1000_max
value: 12.99325734801233
- type: nauc_mrr_at_1000_std
value: 3.843471729334042
- type: nauc_mrr_at_100_diff1
value: 17.435646674940784
- type: nauc_mrr_at_100_max
value: 12.977733602157626
- type: nauc_mrr_at_100_std
value: 3.819688827654704
- type: nauc_mrr_at_10_diff1
value: 17.366258247556274
- type: nauc_mrr_at_10_max
value: 12.525863095955028
- type: nauc_mrr_at_10_std
value: 2.9586217333067033
- type: nauc_mrr_at_1_diff1
value: 21.181200992092933
- type: nauc_mrr_at_1_max
value: 9.071174422547715
- type: nauc_mrr_at_1_std
value: 0.37666341313223156
- type: nauc_mrr_at_20_diff1
value: 17.47842029246494
- type: nauc_mrr_at_20_max
value: 12.782728137865854
- type: nauc_mrr_at_20_std
value: 3.335207400639897
- type: nauc_mrr_at_3_diff1
value: 18.51145002403263
- type: nauc_mrr_at_3_max
value: 10.835289485126742
- type: nauc_mrr_at_3_std
value: 1.9317890085586098
- type: nauc_mrr_at_5_diff1
value: 17.85072852768249
- type: nauc_mrr_at_5_max
value: 11.48513938150474
- type: nauc_mrr_at_5_std
value: 2.42459300983239
- type: nauc_ndcg_at_1000_diff1
value: 16.90906471124972
- type: nauc_ndcg_at_1000_max
value: 18.10309890125217
- type: nauc_ndcg_at_1000_std
value: 9.531587494208333
- type: nauc_ndcg_at_100_diff1
value: 16.794610031459452
- type: nauc_ndcg_at_100_max
value: 17.320423121617587
- type: nauc_ndcg_at_100_std
value: 8.36089871892644
- type: nauc_ndcg_at_10_diff1
value: 16.9238328483549
- type: nauc_ndcg_at_10_max
value: 15.003898384476175
- type: nauc_ndcg_at_10_std
value: 3.220068514580869
- type: nauc_ndcg_at_1_diff1
value: 21.181200992092933
- type: nauc_ndcg_at_1_max
value: 9.071174422547715
- type: nauc_ndcg_at_1_std
value: 0.37666341313223156
- type: nauc_ndcg_at_20_diff1
value: 17.122783032672636
- type: nauc_ndcg_at_20_max
value: 15.811529036192868
- type: nauc_ndcg_at_20_std
value: 4.638881062044276
- type: nauc_ndcg_at_3_diff1
value: 19.397651629456085
- type: nauc_ndcg_at_3_max
value: 11.519185092964664
- type: nauc_ndcg_at_3_std
value: 0.5852664941054009
- type: nauc_ndcg_at_5_diff1
value: 17.836092374281833
- type: nauc_ndcg_at_5_max
value: 12.692159310256345
- type: nauc_ndcg_at_5_std
value: 1.7356004993081944
- type: nauc_precision_at_1000_diff1
value: 3.073453832047264
- type: nauc_precision_at_1000_max
value: 23.790855697865958
- type: nauc_precision_at_1000_std
value: 32.57511127212919
- type: nauc_precision_at_100_diff1
value: 9.127444700503846
- type: nauc_precision_at_100_max
value: 22.71156118580008
- type: nauc_precision_at_100_std
value: 24.63648530454141
- type: nauc_precision_at_10_diff1
value: 13.02401021030829
- type: nauc_precision_at_10_max
value: 18.85263386483255
- type: nauc_precision_at_10_std
value: 8.373513612599647
- type: nauc_precision_at_1_diff1
value: 21.181200992092933
- type: nauc_precision_at_1_max
value: 9.071174422547715
- type: nauc_precision_at_1_std
value: 0.37666341313223156
- type: nauc_precision_at_20_diff1
value: 12.975989332948448
- type: nauc_precision_at_20_max
value: 20.296858370304385
- type: nauc_precision_at_20_std
value: 12.119876359299383
- type: nauc_precision_at_3_diff1
value: 17.130641156396027
- type: nauc_precision_at_3_max
value: 12.010571872098485
- type: nauc_precision_at_3_std
value: 2.637465881798806
- type: nauc_precision_at_5_diff1
value: 14.960326184287629
- type: nauc_precision_at_5_max
value: 14.264819044499205
- type: nauc_precision_at_5_std
value: 4.5445140864787215
- type: nauc_recall_at_1000_diff1
value: 11.322486975456016
- type: nauc_recall_at_1000_max
value: 42.74305283200241
- type: nauc_recall_at_1000_std
value: 47.78794764298061
- type: nauc_recall_at_100_diff1
value: 12.242221079259041
- type: nauc_recall_at_100_max
value: 26.918744103646013
- type: nauc_recall_at_100_std
value: 24.541980019505186
- type: nauc_recall_at_10_diff1
value: 13.38045827515169
- type: nauc_recall_at_10_max
value: 18.545456163809533
- type: nauc_recall_at_10_std
value: 5.734945625849404
- type: nauc_recall_at_1_diff1
value: 22.792911666175435
- type: nauc_recall_at_1_max
value: 9.420966909430586
- type: nauc_recall_at_1_std
value: -2.177707391834426
- type: nauc_recall_at_20_diff1
value: 14.133329746281683
- type: nauc_recall_at_20_max
value: 20.394153554260118
- type: nauc_recall_at_20_std
value: 9.229321407977622
- type: nauc_recall_at_3_diff1
value: 18.230047011254864
- type: nauc_recall_at_3_max
value: 12.217461047044784
- type: nauc_recall_at_3_std
value: 1.0395060720237228
- type: nauc_recall_at_5_diff1
value: 14.947190921163273
- type: nauc_recall_at_5_max
value: 13.844816353548604
- type: nauc_recall_at_5_std
value: 2.9621844586841086
- type: ndcg_at_1
value: 9.733
- type: ndcg_at_10
value: 19.543
- type: ndcg_at_100
value: 25.965
- type: ndcg_at_1000
value: 28.663
- type: ndcg_at_20
value: 21.985
- type: ndcg_at_3
value: 14.308000000000002
- type: ndcg_at_5
value: 16.771
- type: precision_at_1
value: 9.733
- type: precision_at_10
value: 3.7249999999999996
- type: precision_at_100
value: 0.739
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 2.4330000000000003
- type: precision_at_3
value: 6.856
- type: precision_at_5
value: 5.475
- type: recall_at_1
value: 8.413
- type: recall_at_10
value: 31.668000000000003
- type: recall_at_100
value: 61.551
- type: recall_at_1000
value: 82.228
- type: recall_at_20
value: 40.888999999999996
- type: recall_at_3
value: 17.669
- type: recall_at_5
value: 23.488999999999997
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 80.598
- type: map_at_1
value: 63.532
- type: map_at_10
value: 76.07300000000001
- type: map_at_100
value: 76.863
- type: map_at_1000
value: 76.896
- type: map_at_20
value: 76.575
- type: map_at_3
value: 73.075
- type: map_at_5
value: 74.888
- type: mrr_at_1
value: 73.11
- type: mrr_at_10
value: 80.13760714285678
- type: mrr_at_100
value: 80.40676931635143
- type: mrr_at_1000
value: 80.413857041773
- type: mrr_at_20
value: 80.33569450368124
- type: mrr_at_3
value: 78.73166666666627
- type: mrr_at_5
value: 79.60316666666607
- type: nauc_map_at_1000_diff1
value: 71.76748518946404
- type: nauc_map_at_1000_max
value: 37.52091562623074
- type: nauc_map_at_1000_std
value: -19.886772833711106
- type: nauc_map_at_100_diff1
value: 71.77392469494623
- type: nauc_map_at_100_max
value: 37.51305402355471
- type: nauc_map_at_100_std
value: -19.90950133564633
- type: nauc_map_at_10_diff1
value: 71.78435718469383
- type: nauc_map_at_10_max
value: 37.12859151143304
- type: nauc_map_at_10_std
value: -20.6727975668906
- type: nauc_map_at_1_diff1
value: 74.16329762399023
- type: nauc_map_at_1_max
value: 30.710315707498864
- type: nauc_map_at_1_std
value: -19.3193474040897
- type: nauc_map_at_20_diff1
value: 71.8048608565351
- type: nauc_map_at_20_max
value: 37.437936254957336
- type: nauc_map_at_20_std
value: -20.256332267213164
- type: nauc_map_at_3_diff1
value: 72.15934361454754
- type: nauc_map_at_3_max
value: 35.34630080626579
- type: nauc_map_at_3_std
value: -22.03571060362441
- type: nauc_map_at_5_diff1
value: 71.83699898564598
- type: nauc_map_at_5_max
value: 36.479498983192975
- type: nauc_map_at_5_std
value: -21.231304270451062
- type: nauc_mrr_at_1000_diff1
value: 72.88897169606878
- type: nauc_mrr_at_1000_max
value: 40.200221349285634
- type: nauc_mrr_at_1000_std
value: -17.633375591506123
- type: nauc_mrr_at_100_diff1
value: 72.88918562563104
- type: nauc_mrr_at_100_max
value: 40.20508375617468
- type: nauc_mrr_at_100_std
value: -17.62754237516005
- type: nauc_mrr_at_10_diff1
value: 72.78722143722388
- type: nauc_mrr_at_10_max
value: 40.26493516347653
- type: nauc_mrr_at_10_std
value: -17.591516046092213
- type: nauc_mrr_at_1_diff1
value: 74.20323111992924
- type: nauc_mrr_at_1_max
value: 39.1888925247388
- type: nauc_mrr_at_1_std
value: -17.041083591080856
- type: nauc_mrr_at_20_diff1
value: 72.87614719969847
- type: nauc_mrr_at_20_max
value: 40.25187245577547
- type: nauc_mrr_at_20_std
value: -17.623643078270213
- type: nauc_mrr_at_3_diff1
value: 72.70424133205663
- type: nauc_mrr_at_3_max
value: 40.015103745774944
- type: nauc_mrr_at_3_std
value: -18.296912082298693
- type: nauc_mrr_at_5_diff1
value: 72.6695462203408
- type: nauc_mrr_at_5_max
value: 40.166677547198724
- type: nauc_mrr_at_5_std
value: -17.836669429879553
- type: nauc_ndcg_at_1000_diff1
value: 71.7014600627096
- type: nauc_ndcg_at_1000_max
value: 39.17528447849729
- type: nauc_ndcg_at_1000_std
value: -18.169144412803025
- type: nauc_ndcg_at_100_diff1
value: 71.72812292491562
- type: nauc_ndcg_at_100_max
value: 39.178065817466866
- type: nauc_ndcg_at_100_std
value: -17.98857148420824
- type: nauc_ndcg_at_10_diff1
value: 71.22490342106018
- type: nauc_ndcg_at_10_max
value: 38.58976910658222
- type: nauc_ndcg_at_10_std
value: -19.3807889122846
- type: nauc_ndcg_at_1_diff1
value: 74.20323111992924
- type: nauc_ndcg_at_1_max
value: 39.18366557965937
- type: nauc_ndcg_at_1_std
value: -16.979563433712343
- type: nauc_ndcg_at_20_diff1
value: 71.59416957115776
- type: nauc_ndcg_at_20_max
value: 39.11048553178983
- type: nauc_ndcg_at_20_std
value: -18.913452979338476
- type: nauc_ndcg_at_3_diff1
value: 71.15596154191027
- type: nauc_ndcg_at_3_max
value: 37.36564154714553
- type: nauc_ndcg_at_3_std
value: -20.721815190390565
- type: nauc_ndcg_at_5_diff1
value: 71.0047395584928
- type: nauc_ndcg_at_5_max
value: 37.95479899642812
- type: nauc_ndcg_at_5_std
value: -20.008045920279887
- type: nauc_precision_at_1000_diff1
value: -36.79287717727177
- type: nauc_precision_at_1000_max
value: -4.853042765778535
- type: nauc_precision_at_1000_std
value: 21.89700327903914
- type: nauc_precision_at_100_diff1
value: -33.803566917391024
- type: nauc_precision_at_100_max
value: -2.343501157957199
- type: nauc_precision_at_100_std
value: 21.03134251148425
- type: nauc_precision_at_10_diff1
value: -19.647078935128047
- type: nauc_precision_at_10_max
value: 7.646163968592671
- type: nauc_precision_at_10_std
value: 11.425640109742039
- type: nauc_precision_at_1_diff1
value: 74.20323111992924
- type: nauc_precision_at_1_max
value: 39.18366557965937
- type: nauc_precision_at_1_std
value: -16.979563433712343
- type: nauc_precision_at_20_diff1
value: -26.95360783576433
- type: nauc_precision_at_20_max
value: 3.534889652498316
- type: nauc_precision_at_20_std
value: 16.011941126119197
- type: nauc_precision_at_3_diff1
value: 7.80806721613657
- type: nauc_precision_at_3_max
value: 18.93471456458755
- type: nauc_precision_at_3_std
value: -2.3471793824170493
- type: nauc_precision_at_5_diff1
value: -7.187077136844068
- type: nauc_precision_at_5_max
value: 13.710196203710806
- type: nauc_precision_at_5_std
value: 5.029517000064198
- type: nauc_recall_at_1000_diff1
value: 55.29138658386572
- type: nauc_recall_at_1000_max
value: 57.58368141138265
- type: nauc_recall_at_1000_std
value: 33.353499745829765
- type: nauc_recall_at_100_diff1
value: 65.98407378542676
- type: nauc_recall_at_100_max
value: 43.3437006049648
- type: nauc_recall_at_100_std
value: 3.7556643837275345
- type: nauc_recall_at_10_diff1
value: 64.73552843826317
- type: nauc_recall_at_10_max
value: 37.93061567923699
- type: nauc_recall_at_10_std
value: -19.1098323242707
- type: nauc_recall_at_1_diff1
value: 74.16329762399023
- type: nauc_recall_at_1_max
value: 30.710315707498864
- type: nauc_recall_at_1_std
value: -19.3193474040897
- type: nauc_recall_at_20_diff1
value: 64.4507396763554
- type: nauc_recall_at_20_max
value: 40.62914458603293
- type: nauc_recall_at_20_std
value: -15.040711675139082
- type: nauc_recall_at_3_diff1
value: 67.8143518137102
- type: nauc_recall_at_3_max
value: 33.649275891159945
- type: nauc_recall_at_3_std
value: -24.400275123272163
- type: nauc_recall_at_5_diff1
value: 65.9405683463817
- type: nauc_recall_at_5_max
value: 35.64051201738537
- type: nauc_recall_at_5_std
value: -22.06335424061329
- type: ndcg_at_1
value: 73.11
- type: ndcg_at_10
value: 80.598
- type: ndcg_at_100
value: 82.75200000000001
- type: ndcg_at_1000
value: 83.145
- type: ndcg_at_20
value: 81.71300000000001
- type: ndcg_at_3
value: 77.025
- type: ndcg_at_5
value: 78.85
- type: precision_at_1
value: 73.11
- type: precision_at_10
value: 12.206999999999999
- type: precision_at_100
value: 1.459
- type: precision_at_1000
value: 0.155
- type: precision_at_20
value: 6.579
- type: precision_at_3
value: 33.36
- type: precision_at_5
value: 22.09
- type: recall_at_1
value: 63.532
- type: recall_at_10
value: 89.32600000000001
- type: recall_at_100
value: 97.35000000000001
- type: recall_at_1000
value: 99.613
- type: recall_at_20
value: 93.151
- type: recall_at_3
value: 79.074
- type: recall_at_5
value: 84.143
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 39.5465127563479
- type: v_measure
value: 39.5465127563479
- type: v_measure_std
value: 5.038703300031419
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 47.07911795189491
- type: v_measure
value: 47.07911795189491
- type: v_measure_std
value: 11.546436135362846
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 12.386999999999999
- type: map_at_1
value: 3.053
- type: map_at_10
value: 6.912999999999999
- type: map_at_100
value: 8.261000000000001
- type: map_at_1000
value: 8.530999999999999
- type: map_at_20
value: 7.566000000000001
- type: map_at_3
value: 5.094
- type: map_at_5
value: 5.997
- type: mrr_at_1
value: 15.0
- type: mrr_at_10
value: 22.795357142857135
- type: mrr_at_100
value: 24.007787966055577
- type: mrr_at_1000
value: 24.09964360060081
- type: mrr_at_20
value: 23.466190383404
- type: mrr_at_3
value: 20.100000000000012
- type: mrr_at_5
value: 21.685000000000006
- type: nauc_map_at_1000_diff1
value: 11.73412101608325
- type: nauc_map_at_1000_max
value: 14.330449150895694
- type: nauc_map_at_1000_std
value: 15.742095990011743
- type: nauc_map_at_100_diff1
value: 11.777038848684697
- type: nauc_map_at_100_max
value: 14.104140826193404
- type: nauc_map_at_100_std
value: 15.155771699462264
- type: nauc_map_at_10_diff1
value: 12.374060330916672
- type: nauc_map_at_10_max
value: 11.856630361520313
- type: nauc_map_at_10_std
value: 11.753665232073269
- type: nauc_map_at_1_diff1
value: 16.986085327339335
- type: nauc_map_at_1_max
value: 12.246255844992572
- type: nauc_map_at_1_std
value: 7.863450169503143
- type: nauc_map_at_20_diff1
value: 11.634858111388464
- type: nauc_map_at_20_max
value: 13.108008262696513
- type: nauc_map_at_20_std
value: 13.423455469499999
- type: nauc_map_at_3_diff1
value: 14.889445454705324
- type: nauc_map_at_3_max
value: 11.572110481390013
- type: nauc_map_at_3_std
value: 8.556136010622351
- type: nauc_map_at_5_diff1
value: 12.907309838627985
- type: nauc_map_at_5_max
value: 11.000220583694968
- type: nauc_map_at_5_std
value: 10.111376166991917
- type: nauc_mrr_at_1000_diff1
value: 14.963874100415397
- type: nauc_mrr_at_1000_max
value: 13.495160823256164
- type: nauc_mrr_at_1000_std
value: 11.28815345444998
- type: nauc_mrr_at_100_diff1
value: 14.97621893176082
- type: nauc_mrr_at_100_max
value: 13.464936280105155
- type: nauc_mrr_at_100_std
value: 11.305521958378108
- type: nauc_mrr_at_10_diff1
value: 14.956869421525884
- type: nauc_mrr_at_10_max
value: 13.425685629657924
- type: nauc_mrr_at_10_std
value: 10.767260180262618
- type: nauc_mrr_at_1_diff1
value: 16.83378691664147
- type: nauc_mrr_at_1_max
value: 12.112287067835906
- type: nauc_mrr_at_1_std
value: 8.418304606390475
- type: nauc_mrr_at_20_diff1
value: 14.917032940839656
- type: nauc_mrr_at_20_max
value: 13.41755983642966
- type: nauc_mrr_at_20_std
value: 11.11458079038555
- type: nauc_mrr_at_3_diff1
value: 15.214496970273089
- type: nauc_mrr_at_3_max
value: 12.165871395179483
- type: nauc_mrr_at_3_std
value: 9.980162064503286
- type: nauc_mrr_at_5_diff1
value: 14.835204244776087
- type: nauc_mrr_at_5_max
value: 12.524956858818742
- type: nauc_mrr_at_5_std
value: 10.099655249800849
- type: nauc_ndcg_at_1000_diff1
value: 10.764737128236437
- type: nauc_ndcg_at_1000_max
value: 18.3469700109834
- type: nauc_ndcg_at_1000_std
value: 23.22837765426608
- type: nauc_ndcg_at_100_diff1
value: 11.606245579895573
- type: nauc_ndcg_at_100_max
value: 17.167157579603412
- type: nauc_ndcg_at_100_std
value: 20.347909657378473
- type: nauc_ndcg_at_10_diff1
value: 12.394040285590439
- type: nauc_ndcg_at_10_max
value: 13.388439287974505
- type: nauc_ndcg_at_10_std
value: 13.188024533529397
- type: nauc_ndcg_at_1_diff1
value: 16.83378691664147
- type: nauc_ndcg_at_1_max
value: 12.112287067835906
- type: nauc_ndcg_at_1_std
value: 8.418304606390475
- type: nauc_ndcg_at_20_diff1
value: 11.212784095325706
- type: nauc_ndcg_at_20_max
value: 15.185332617097233
- type: nauc_ndcg_at_20_std
value: 16.087050160363443
- type: nauc_ndcg_at_3_diff1
value: 14.708471591387005
- type: nauc_ndcg_at_3_max
value: 11.70756510699363
- type: nauc_ndcg_at_3_std
value: 9.658612404132116
- type: nauc_ndcg_at_5_diff1
value: 13.123868466784149
- type: nauc_ndcg_at_5_max
value: 11.60382600862464
- type: nauc_ndcg_at_5_std
value: 10.625775061954277
- type: nauc_precision_at_1000_diff1
value: 3.608251418490512
- type: nauc_precision_at_1000_max
value: 20.501537930519582
- type: nauc_precision_at_1000_std
value: 34.4770607840569
- type: nauc_precision_at_100_diff1
value: 7.864853652134883
- type: nauc_precision_at_100_max
value: 19.894334894038547
- type: nauc_precision_at_100_std
value: 28.711783183330663
- type: nauc_precision_at_10_diff1
value: 9.605214553552692
- type: nauc_precision_at_10_max
value: 14.347596155123817
- type: nauc_precision_at_10_std
value: 16.242794843380032
- type: nauc_precision_at_1_diff1
value: 16.83378691664147
- type: nauc_precision_at_1_max
value: 12.112287067835906
- type: nauc_precision_at_1_std
value: 8.418304606390475
- type: nauc_precision_at_20_diff1
value: 6.9964985542924545
- type: nauc_precision_at_20_max
value: 17.275243538199216
- type: nauc_precision_at_20_std
value: 20.986245055691036
- type: nauc_precision_at_3_diff1
value: 13.995705983866177
- type: nauc_precision_at_3_max
value: 11.391320470301181
- type: nauc_precision_at_3_std
value: 10.151716783634907
- type: nauc_precision_at_5_diff1
value: 11.064867165700008
- type: nauc_precision_at_5_max
value: 10.965289810519257
- type: nauc_precision_at_5_std
value: 11.837752544253021
- type: nauc_recall_at_1000_diff1
value: 3.4118402840027118
- type: nauc_recall_at_1000_max
value: 21.505334337938027
- type: nauc_recall_at_1000_std
value: 34.87205826061254
- type: nauc_recall_at_100_diff1
value: 7.793188645900735
- type: nauc_recall_at_100_max
value: 20.09269964020807
- type: nauc_recall_at_100_std
value: 28.838050639358375
- type: nauc_recall_at_10_diff1
value: 10.010288074812564
- type: nauc_recall_at_10_max
value: 14.470333599080465
- type: nauc_recall_at_10_std
value: 16.106977670704044
- type: nauc_recall_at_1_diff1
value: 16.986085327339335
- type: nauc_recall_at_1_max
value: 12.246255844992572
- type: nauc_recall_at_1_std
value: 7.863450169503143
- type: nauc_recall_at_20_diff1
value: 7.248991485381231
- type: nauc_recall_at_20_max
value: 17.357162157871585
- type: nauc_recall_at_20_std
value: 20.916649810908385
- type: nauc_recall_at_3_diff1
value: 14.190312777099356
- type: nauc_recall_at_3_max
value: 11.494013846579504
- type: nauc_recall_at_3_std
value: 9.871734511413411
- type: nauc_recall_at_5_diff1
value: 11.369318015463497
- type: nauc_recall_at_5_max
value: 11.0867249382338
- type: nauc_recall_at_5_std
value: 11.565786080587733
- type: ndcg_at_1
value: 15.0
- type: ndcg_at_10
value: 12.386999999999999
- type: ndcg_at_100
value: 18.533
- type: ndcg_at_1000
value: 23.955000000000002
- type: ndcg_at_20
value: 14.459
- type: ndcg_at_3
value: 11.75
- type: ndcg_at_5
value: 10.285
- type: precision_at_1
value: 15.0
- type: precision_at_10
value: 6.36
- type: precision_at_100
value: 1.528
- type: precision_at_1000
value: 0.28300000000000003
- type: precision_at_20
value: 4.375
- type: precision_at_3
value: 10.767
- type: precision_at_5
value: 8.9
- type: recall_at_1
value: 3.053
- type: recall_at_10
value: 12.873000000000001
- type: recall_at_100
value: 30.982
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_20
value: 17.718
- type: recall_at_3
value: 6.553000000000001
- type: recall_at_5
value: 9.013
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 75.67336823619708
- type: cosine_spearman
value: 64.6753400763881
- type: euclidean_pearson
value: 69.13481550039579
- type: euclidean_spearman
value: 64.6752133161514
- type: main_score
value: 64.6753400763881
- type: manhattan_pearson
value: 69.01619023671678
- type: manhattan_spearman
value: 64.8728231074179
- type: pearson
value: 75.67336823619708
- type: spearman
value: 64.6753400763881
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 72.06681927996405
- type: cosine_spearman
value: 62.248985055530525
- type: euclidean_pearson
value: 68.05815981894538
- type: euclidean_spearman
value: 62.248985055530525
- type: main_score
value: 62.248985055530525
- type: manhattan_pearson
value: 66.68543185400786
- type: manhattan_spearman
value: 61.43850654925033
- type: pearson
value: 72.06681927996405
- type: spearman
value: 62.248985055530525
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 76.53983680018591
- type: cosine_spearman
value: 77.27600787572996
- type: euclidean_pearson
value: 76.77960647262235
- type: euclidean_spearman
value: 77.27600787572996
- type: main_score
value: 77.27600787572996
- type: manhattan_pearson
value: 76.37651436440808
- type: manhattan_spearman
value: 76.85568457177312
- type: pearson
value: 76.53983680018591
- type: spearman
value: 77.27600787572996
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 76.20854411766629
- type: cosine_spearman
value: 71.914099628002
- type: euclidean_pearson
value: 74.5273047891339
- type: euclidean_spearman
value: 71.914099628002
- type: main_score
value: 71.914099628002
- type: manhattan_pearson
value: 74.53275458017302
- type: manhattan_spearman
value: 71.9720930787841
- type: pearson
value: 76.20854411766629
- type: spearman
value: 71.914099628002
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 79.24273419832653
- type: cosine_spearman
value: 79.75345871163103
- type: euclidean_pearson
value: 79.31395801169265
- type: euclidean_spearman
value: 79.75345871163103
- type: main_score
value: 79.75345871163103
- type: manhattan_pearson
value: 79.24199238927697
- type: manhattan_spearman
value: 79.64058599210834
- type: pearson
value: 79.24273419832653
- type: spearman
value: 79.75345871163103
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 75.64452330127995
- type: cosine_spearman
value: 76.26343823222666
- type: euclidean_pearson
value: 75.64112047932008
- type: euclidean_spearman
value: 76.26343823222666
- type: main_score
value: 76.26343823222666
- type: manhattan_pearson
value: 75.32718809126764
- type: manhattan_spearman
value: 75.9420892784719
- type: pearson
value: 75.64452330127995
- type: spearman
value: 76.26343823222666
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 17.52217310066287
- type: cosine_spearman
value: 14.729958484232528
- type: euclidean_pearson
value: 17.507234354096582
- type: euclidean_spearman
value: 14.729958484232528
- type: main_score
value: 14.729958484232528
- type: manhattan_pearson
value: 15.286020788097272
- type: manhattan_spearman
value: 11.320242312589713
- type: pearson
value: 17.52217310066287
- type: spearman
value: 14.729958484232528
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 84.67406984717113
- type: cosine_spearman
value: 85.96709815630739
- type: euclidean_pearson
value: 84.7186375682207
- type: euclidean_spearman
value: 85.96709815630739
- type: main_score
value: 85.96709815630739
- type: manhattan_pearson
value: 85.07894758059129
- type: manhattan_spearman
value: 86.57110045700985
- type: pearson
value: 84.67406984717113
- type: spearman
value: 85.96709815630739
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 36.02331692863771
- type: cosine_spearman
value: 34.28540470062557
- type: euclidean_pearson
value: 35.996881386631514
- type: euclidean_spearman
value: 34.28540470062557
- type: main_score
value: 34.28540470062557
- type: manhattan_pearson
value: 35.47246063445784
- type: manhattan_spearman
value: 34.83247787211397
- type: pearson
value: 36.02331692863771
- type: spearman
value: 34.28540470062557
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 13.925983981770388
- type: cosine_spearman
value: 11.193291331109325
- type: euclidean_pearson
value: 13.9151651239108
- type: euclidean_spearman
value: 11.193291331109325
- type: main_score
value: 11.193291331109325
- type: manhattan_pearson
value: 12.652407957594654
- type: manhattan_spearman
value: 9.888358907769014
- type: pearson
value: 13.925983981770388
- type: spearman
value: 11.193291331109325
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 26.77839285232968
- type: cosine_spearman
value: 23.010015986939717
- type: euclidean_pearson
value: 27.13668235790385
- type: euclidean_spearman
value: 23.010015986939717
- type: main_score
value: 23.010015986939717
- type: manhattan_pearson
value: 27.02698710744775
- type: manhattan_spearman
value: 23.038730409304936
- type: pearson
value: 26.77839285232968
- type: spearman
value: 23.010015986939717
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 25.330935194314364
- type: cosine_spearman
value: 23.143555348782797
- type: euclidean_pearson
value: 24.670147594978143
- type: euclidean_spearman
value: 23.143555348782797
- type: main_score
value: 23.143555348782797
- type: manhattan_pearson
value: 24.879695698914418
- type: manhattan_spearman
value: 25.916630507885134
- type: pearson
value: 25.330935194314364
- type: spearman
value: 23.143555348782797
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 6.61651078645899
- type: cosine_spearman
value: 5.415104433010482
- type: euclidean_pearson
value: 6.791575957480809
- type: euclidean_spearman
value: 5.415104433010482
- type: main_score
value: 5.415104433010482
- type: manhattan_pearson
value: 3.6585407382250987
- type: manhattan_spearman
value: 4.566044103659472
- type: pearson
value: 6.61651078645899
- type: spearman
value: 5.415104433010482
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 32.718045784523184
- type: cosine_spearman
value: 27.52844368619317
- type: euclidean_pearson
value: 32.98978359596458
- type: euclidean_spearman
value: 27.52844368619317
- type: main_score
value: 27.52844368619317
- type: manhattan_pearson
value: 35.57923949366344
- type: manhattan_spearman
value: 34.27137422651138
- type: pearson
value: 32.718045784523184
- type: spearman
value: 27.52844368619317
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 9.98410299881163
- type: cosine_spearman
value: 10.98684405086525
- type: euclidean_pearson
value: 9.461680781495218
- type: euclidean_spearman
value: 10.9925413190658
- type: main_score
value: 10.98684405086525
- type: manhattan_pearson
value: 9.442055271895944
- type: manhattan_spearman
value: 11.226101908391069
- type: pearson
value: 9.98410299881163
- type: spearman
value: 10.98684405086525
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 59.3180680265132
- type: cosine_spearman
value: 63.07956002739231
- type: euclidean_pearson
value: 62.46424835000928
- type: euclidean_spearman
value: 63.07956002739231
- type: main_score
value: 63.07956002739231
- type: manhattan_pearson
value: 62.048137683643766
- type: manhattan_spearman
value: 61.83898606879604
- type: pearson
value: 59.3180680265132
- type: spearman
value: 63.07956002739231
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 29.061215770374826
- type: cosine_spearman
value: 36.21441725938738
- type: euclidean_pearson
value: 28.44045530150387
- type: euclidean_spearman
value: 36.21441725938738
- type: main_score
value: 36.21441725938738
- type: manhattan_pearson
value: 29.32403221599612
- type: manhattan_spearman
value: 38.914481153396494
- type: pearson
value: 29.061215770374826
- type: spearman
value: 36.21441725938738
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 11.266385865086239
- type: cosine_spearman
value: 17.291293843893733
- type: euclidean_pearson
value: 10.045897285683115
- type: euclidean_spearman
value: 17.321323804048646
- type: main_score
value: 17.291293843893733
- type: manhattan_pearson
value: 15.333482209624194
- type: manhattan_spearman
value: 20.399166731513915
- type: pearson
value: 11.266385865086239
- type: spearman
value: 17.291293843893733
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 9.647587208410648
- type: cosine_spearman
value: 21.33739699413266
- type: euclidean_pearson
value: 7.451981822243237
- type: euclidean_spearman
value: 21.33739699413266
- type: main_score
value: 21.33739699413266
- type: manhattan_pearson
value: 10.05280275870948
- type: manhattan_spearman
value: 22.233400969472218
- type: pearson
value: 9.647587208410648
- type: spearman
value: 21.33739699413266
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 77.2598255013409
- type: cosine_spearman
value: 75.40519061413276
- type: euclidean_pearson
value: 77.19878276657876
- type: euclidean_spearman
value: 75.40519061413276
- type: main_score
value: 75.40519061413276
- type: manhattan_pearson
value: 77.04099640594512
- type: manhattan_spearman
value: 75.32219501493076
- type: pearson
value: 77.2598255013409
- type: spearman
value: 75.40519061413276
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 72.10127087089839
- type: map
value: 72.10127087089839
- type: mrr
value: 90.62288020621355
- type: nAUC_map_diff1
value: 8.726677558277695
- type: nAUC_map_max
value: 54.59636736704295
- type: nAUC_map_std
value: 67.36367052533402
- type: nAUC_mrr_diff1
value: 47.77588337162405
- type: nAUC_mrr_max
value: 74.90946175462605
- type: nAUC_mrr_std
value: 71.81332269641806
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 50.63999999999999
- type: map_at_1
value: 35.5
- type: map_at_10
value: 45.238
- type: map_at_100
value: 46.135999999999996
- type: map_at_1000
value: 46.181
- type: map_at_20
value: 45.767
- type: map_at_3
value: 42.329
- type: map_at_5
value: 44.054
- type: mrr_at_1
value: 37.666666666666664
- type: mrr_at_10
value: 46.6611111111111
- type: mrr_at_100
value: 47.37819687814183
- type: mrr_at_1000
value: 47.417644921595766
- type: mrr_at_20
value: 47.06856780130773
- type: mrr_at_3
value: 43.94444444444443
- type: mrr_at_5
value: 45.52777777777777
- type: nauc_map_at_1000_diff1
value: 52.83081390161976
- type: nauc_map_at_1000_max
value: 37.21621852995913
- type: nauc_map_at_1000_std
value: -3.416369626271914
- type: nauc_map_at_100_diff1
value: 52.823502489139884
- type: nauc_map_at_100_max
value: 37.2435733087758
- type: nauc_map_at_100_std
value: -3.376708460074628
- type: nauc_map_at_10_diff1
value: 52.495695868970785
- type: nauc_map_at_10_max
value: 36.79244353087952
- type: nauc_map_at_10_std
value: -3.998841918813238
- type: nauc_map_at_1_diff1
value: 55.20714819661926
- type: nauc_map_at_1_max
value: 33.68583272500883
- type: nauc_map_at_1_std
value: -7.806502386166579
- type: nauc_map_at_20_diff1
value: 52.82557233788675
- type: nauc_map_at_20_max
value: 37.02532534485883
- type: nauc_map_at_20_std
value: -3.6962702134516126
- type: nauc_map_at_3_diff1
value: 53.005833884053054
- type: nauc_map_at_3_max
value: 35.102473883265056
- type: nauc_map_at_3_std
value: -6.237364868462919
- type: nauc_map_at_5_diff1
value: 52.67151253564545
- type: nauc_map_at_5_max
value: 36.083416260083574
- type: nauc_map_at_5_std
value: -4.7023113318143785
- type: nauc_mrr_at_1000_diff1
value: 52.938698102997094
- type: nauc_mrr_at_1000_max
value: 39.46705187537523
- type: nauc_mrr_at_1000_std
value: 0.6163818152860598
- type: nauc_mrr_at_100_diff1
value: 52.93491193041612
- type: nauc_mrr_at_100_max
value: 39.490426719059165
- type: nauc_mrr_at_100_std
value: 0.6662007971949842
- type: nauc_mrr_at_10_diff1
value: 52.70216069864656
- type: nauc_mrr_at_10_max
value: 39.52193808791504
- type: nauc_mrr_at_10_std
value: 0.536595037291294
- type: nauc_mrr_at_1_diff1
value: 55.77100806609076
- type: nauc_mrr_at_1_max
value: 37.966164940491446
- type: nauc_mrr_at_1_std
value: -2.1074234936282537
- type: nauc_mrr_at_20_diff1
value: 52.942136130524986
- type: nauc_mrr_at_20_max
value: 39.42716448302782
- type: nauc_mrr_at_20_std
value: 0.5472281187662744
- type: nauc_mrr_at_3_diff1
value: 53.144295072591206
- type: nauc_mrr_at_3_max
value: 38.05294316134295
- type: nauc_mrr_at_3_std
value: -1.2360608664776096
- type: nauc_mrr_at_5_diff1
value: 52.789220500594205
- type: nauc_mrr_at_5_max
value: 38.83395427252616
- type: nauc_mrr_at_5_std
value: -0.09099470685601964
- type: nauc_ndcg_at_1000_diff1
value: 52.16867590195915
- type: nauc_ndcg_at_1000_max
value: 39.70115643730131
- type: nauc_ndcg_at_1000_std
value: 0.904258507053096
- type: nauc_ndcg_at_100_diff1
value: 51.87328245345757
- type: nauc_ndcg_at_100_max
value: 40.59055338026654
- type: nauc_ndcg_at_100_std
value: 2.554356951645788
- type: nauc_ndcg_at_10_diff1
value: 50.809281234563805
- type: nauc_ndcg_at_10_max
value: 39.085094925973245
- type: nauc_ndcg_at_10_std
value: -0.23387754671232033
- type: nauc_ndcg_at_1_diff1
value: 55.77100806609076
- type: nauc_ndcg_at_1_max
value: 37.966164940491446
- type: nauc_ndcg_at_1_std
value: -2.1074234936282537
- type: nauc_ndcg_at_20_diff1
value: 51.74864887078553
- type: nauc_ndcg_at_20_max
value: 39.32033115509482
- type: nauc_ndcg_at_20_std
value: 0.4346356935494506
- type: nauc_ndcg_at_3_diff1
value: 51.9909705702443
- type: nauc_ndcg_at_3_max
value: 36.078476037019094
- type: nauc_ndcg_at_3_std
value: -4.014502363911228
- type: nauc_ndcg_at_5_diff1
value: 51.312788955634325
- type: nauc_ndcg_at_5_max
value: 37.54290824294073
- type: nauc_ndcg_at_5_std
value: -1.8169251273098448
- type: nauc_precision_at_1000_diff1
value: 1.4596703970072096
- type: nauc_precision_at_1000_max
value: 36.408552907408
- type: nauc_precision_at_1000_std
value: 53.892991905053776
- type: nauc_precision_at_100_diff1
value: 17.90829681479967
- type: nauc_precision_at_100_max
value: 50.02058762977557
- type: nauc_precision_at_100_std
value: 50.95242296795188
- type: nauc_precision_at_10_diff1
value: 33.69533492770854
- type: nauc_precision_at_10_max
value: 47.554637845938025
- type: nauc_precision_at_10_std
value: 21.812883074791838
- type: nauc_precision_at_1_diff1
value: 55.77100806609076
- type: nauc_precision_at_1_max
value: 37.966164940491446
- type: nauc_precision_at_1_std
value: -2.1074234936282537
- type: nauc_precision_at_20_diff1
value: 31.797703948512723
- type: nauc_precision_at_20_max
value: 46.94077230822751
- type: nauc_precision_at_20_std
value: 29.525569664289396
- type: nauc_precision_at_3_diff1
value: 41.753151429999456
- type: nauc_precision_at_3_max
value: 38.30163209243931
- type: nauc_precision_at_3_std
value: 6.19935377482869
- type: nauc_precision_at_5_diff1
value: 38.479320931912575
- type: nauc_precision_at_5_max
value: 41.576866734894516
- type: nauc_precision_at_5_std
value: 13.327714566652604
- type: nauc_recall_at_1000_diff1
value: 50.28923446773287
- type: nauc_recall_at_1000_max
value: 68.29528746364413
- type: nauc_recall_at_1000_std
value: 48.2313231806132
- type: nauc_recall_at_100_diff1
value: 46.22085619290839
- type: nauc_recall_at_100_max
value: 61.60933703216747
- type: nauc_recall_at_100_std
value: 42.210649980610896
- type: nauc_recall_at_10_diff1
value: 43.10485234893865
- type: nauc_recall_at_10_max
value: 43.06779802776641
- type: nauc_recall_at_10_std
value: 8.272818985431385
- type: nauc_recall_at_1_diff1
value: 55.20714819661926
- type: nauc_recall_at_1_max
value: 33.68583272500883
- type: nauc_recall_at_1_std
value: -7.806502386166579
- type: nauc_recall_at_20_diff1
value: 46.850902149595036
- type: nauc_recall_at_20_max
value: 44.58623368637416
- type: nauc_recall_at_20_std
value: 11.890054420031708
- type: nauc_recall_at_3_diff1
value: 48.80301236823221
- type: nauc_recall_at_3_max
value: 34.177890037375
- type: nauc_recall_at_3_std
value: -3.852215004054359
- type: nauc_recall_at_5_diff1
value: 46.206941308622056
- type: nauc_recall_at_5_max
value: 38.61994260176494
- type: nauc_recall_at_5_std
value: 2.735469769782116
- type: ndcg_at_1
value: 37.667
- type: ndcg_at_10
value: 50.63999999999999
- type: ndcg_at_100
value: 54.885
- type: ndcg_at_1000
value: 56.274
- type: ndcg_at_20
value: 52.349000000000004
- type: ndcg_at_3
value: 44.891999999999996
- type: ndcg_at_5
value: 47.788000000000004
- type: precision_at_1
value: 37.667
- type: precision_at_10
value: 7.3
- type: precision_at_100
value: 0.97
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 4.067
- type: precision_at_3
value: 18.333
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 35.5
- type: recall_at_10
value: 66.178
- type: recall_at_100
value: 85.9
- type: recall_at_1000
value: 97.1
- type: recall_at_20
value: 72.60600000000001
- type: recall_at_3
value: 50.306
- type: recall_at_5
value: 57.443999999999996
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.71386138613862
- type: cosine_accuracy_threshold
value: 78.56961662426235
- type: cosine_ap
value: 90.20131927652946
- type: cosine_f1
value: 84.7749114820435
- type: cosine_f1_threshold
value: 75.7768544371973
- type: cosine_precision
value: 85.7727737973388
- type: cosine_recall
value: 83.8
- type: dot_accuracy
value: 99.71386138613862
- type: dot_accuracy_threshold
value: 78.56961780669964
- type: dot_ap
value: 90.20131927652946
- type: dot_f1
value: 84.7749114820435
- type: dot_f1_threshold
value: 75.77685228378391
- type: dot_precision
value: 85.7727737973388
- type: dot_recall
value: 83.8
- type: euclidean_accuracy
value: 99.71386138613862
- type: euclidean_accuracy_threshold
value: 65.46813529720524
- type: euclidean_ap
value: 90.20131927652946
- type: euclidean_f1
value: 84.7749114820435
- type: euclidean_f1_threshold
value: 69.60336608830053
- type: euclidean_precision
value: 85.7727737973388
- type: euclidean_recall
value: 83.8
- type: main_score
value: 90.20131927652946
- type: manhattan_accuracy
value: 99.7059405940594
- type: manhattan_accuracy_threshold
value: 804.8100425289704
- type: manhattan_ap
value: 90.00682250828237
- type: manhattan_f1
value: 84.44211629125196
- type: manhattan_f1_threshold
value: 828.8486447498144
- type: manhattan_precision
value: 88.66886688668868
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.71386138613862
- type: max_ap
value: 90.20131927652946
- type: max_f1
value: 84.7749114820435
- type: max_precision
value: 88.66886688668868
- type: max_recall
value: 83.8
- type: similarity_accuracy
value: 99.71386138613862
- type: similarity_accuracy_threshold
value: 78.56961662426235
- type: similarity_ap
value: 90.20131927652946
- type: similarity_f1
value: 84.7749114820435
- type: similarity_f1_threshold
value: 75.7768544371973
- type: similarity_precision
value: 85.7727737973388
- type: similarity_recall
value: 83.8
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 48.18939518021159
- type: v_measure
value: 48.18939518021159
- type: v_measure_std
value: 4.6189444340187995
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 30.743938802421265
- type: v_measure
value: 30.743938802421265
- type: v_measure_std
value: 1.4645401677053824
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 43.254152892780986
- type: map
value: 43.254152892780986
- type: mrr
value: 43.70483989050165
- type: nAUC_map_diff1
value: 33.22453777168869
- type: nAUC_map_max
value: 13.175366935671228
- type: nAUC_map_std
value: 3.718253924398536
- type: nAUC_mrr_diff1
value: 32.58818809467491
- type: nAUC_mrr_max
value: 14.093758435205075
- type: nAUC_mrr_std
value: 4.198791420159734
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 29.88360050203766
- type: cosine_spearman
value: 29.275185932109494
- type: dot_pearson
value: 29.883597746108975
- type: dot_spearman
value: 29.28377974870949
- type: main_score
value: 29.275185932109494
- type: pearson
value: 29.88360050203766
- type: spearman
value: 29.275185932109494
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 45.747
- type: map_at_1
value: 0.148
- type: map_at_10
value: 0.972
- type: map_at_100
value: 4.652
- type: map_at_1000
value: 11.511000000000001
- type: map_at_20
value: 1.643
- type: map_at_3
value: 0.369
- type: map_at_5
value: 0.561
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 70.06904761904761
- type: mrr_at_100
value: 70.45500059672992
- type: mrr_at_1000
value: 70.45500059672992
- type: mrr_at_20
value: 70.31716791979949
- type: mrr_at_3
value: 68.0
- type: mrr_at_5
value: 69.19999999999999
- type: nauc_map_at_1000_diff1
value: -0.8266899821302324
- type: nauc_map_at_1000_max
value: 34.62914536640893
- type: nauc_map_at_1000_std
value: 57.177693387251615
- type: nauc_map_at_100_diff1
value: -3.3097934383165613
- type: nauc_map_at_100_max
value: 22.052336613600293
- type: nauc_map_at_100_std
value: 29.905360060478188
- type: nauc_map_at_10_diff1
value: 6.057035481050755
- type: nauc_map_at_10_max
value: 22.742824418774667
- type: nauc_map_at_10_std
value: 5.649441588476496
- type: nauc_map_at_1_diff1
value: 10.469485578180873
- type: nauc_map_at_1_max
value: 4.582098501050435
- type: nauc_map_at_1_std
value: -10.47482550446343
- type: nauc_map_at_20_diff1
value: 1.5813367839245727
- type: nauc_map_at_20_max
value: 25.09380802651507
- type: nauc_map_at_20_std
value: 11.733045886140895
- type: nauc_map_at_3_diff1
value: -0.4174848325628528
- type: nauc_map_at_3_max
value: 16.54291715633098
- type: nauc_map_at_3_std
value: -6.315368365719176
- type: nauc_map_at_5_diff1
value: 1.6439114449809122
- type: nauc_map_at_5_max
value: 18.119472468345634
- type: nauc_map_at_5_std
value: -1.4642215840068935
- type: nauc_mrr_at_1000_diff1
value: 19.962304210632194
- type: nauc_mrr_at_1000_max
value: 28.66281052259736
- type: nauc_mrr_at_1000_std
value: 14.4833499197582
- type: nauc_mrr_at_100_diff1
value: 19.962304210632194
- type: nauc_mrr_at_100_max
value: 28.66281052259736
- type: nauc_mrr_at_100_std
value: 14.4833499197582
- type: nauc_mrr_at_10_diff1
value: 19.79498540271038
- type: nauc_mrr_at_10_max
value: 28.07551011390951
- type: nauc_mrr_at_10_std
value: 13.820791565247939
- type: nauc_mrr_at_1_diff1
value: 23.72088730271045
- type: nauc_mrr_at_1_max
value: 29.338830261821947
- type: nauc_mrr_at_1_std
value: 10.463649509276033
- type: nauc_mrr_at_20_diff1
value: 20.06776286940325
- type: nauc_mrr_at_20_max
value: 28.69272909781133
- type: nauc_mrr_at_20_std
value: 14.560673636667628
- type: nauc_mrr_at_3_diff1
value: 18.71166001912622
- type: nauc_mrr_at_3_max
value: 30.645161290322555
- type: nauc_mrr_at_3_std
value: 16.37394164159257
- type: nauc_mrr_at_5_diff1
value: 15.791374902745353
- type: nauc_mrr_at_5_max
value: 28.51602708149093
- type: nauc_mrr_at_5_std
value: 15.246386476651619
- type: nauc_ndcg_at_1000_diff1
value: -5.179304837164554
- type: nauc_ndcg_at_1000_max
value: 27.27301986190763
- type: nauc_ndcg_at_1000_std
value: 49.239144813886654
- type: nauc_ndcg_at_100_diff1
value: 7.283019925558149
- type: nauc_ndcg_at_100_max
value: 29.80340187562149
- type: nauc_ndcg_at_100_std
value: 47.60799676958296
- type: nauc_ndcg_at_10_diff1
value: 11.621471677557253
- type: nauc_ndcg_at_10_max
value: 31.78727749460396
- type: nauc_ndcg_at_10_std
value: 26.339328462146177
- type: nauc_ndcg_at_1_diff1
value: 26.896384303421446
- type: nauc_ndcg_at_1_max
value: 28.727080596332872
- type: nauc_ndcg_at_1_std
value: 12.10515793682523
- type: nauc_ndcg_at_20_diff1
value: 7.253524538786647
- type: nauc_ndcg_at_20_max
value: 33.412855576178295
- type: nauc_ndcg_at_20_std
value: 34.10895211064073
- type: nauc_ndcg_at_3_diff1
value: 11.303112239393863
- type: nauc_ndcg_at_3_max
value: 35.0880605283756
- type: nauc_ndcg_at_3_std
value: 18.514877130637803
- type: nauc_ndcg_at_5_diff1
value: 8.537541001217583
- type: nauc_ndcg_at_5_max
value: 32.24796400964019
- type: nauc_ndcg_at_5_std
value: 21.65596013895985
- type: nauc_precision_at_1000_diff1
value: 5.217123572202896
- type: nauc_precision_at_1000_max
value: 31.954154167309177
- type: nauc_precision_at_1000_std
value: 60.51613061301686
- type: nauc_precision_at_100_diff1
value: 5.748688865778208
- type: nauc_precision_at_100_max
value: 28.503515028630567
- type: nauc_precision_at_100_std
value: 52.8175811950368
- type: nauc_precision_at_10_diff1
value: 9.634424129349284
- type: nauc_precision_at_10_max
value: 33.90210630229416
- type: nauc_precision_at_10_std
value: 30.197787312348073
- type: nauc_precision_at_1_diff1
value: 23.72088730271045
- type: nauc_precision_at_1_max
value: 29.338830261821947
- type: nauc_precision_at_1_std
value: 10.463649509276033
- type: nauc_precision_at_20_diff1
value: 2.6440820197838923
- type: nauc_precision_at_20_max
value: 36.6927642980172
- type: nauc_precision_at_20_std
value: 40.53918258763216
- type: nauc_precision_at_3_diff1
value: 2.9773659425793695
- type: nauc_precision_at_3_max
value: 35.63522203655881
- type: nauc_precision_at_3_std
value: 17.365942579371055
- type: nauc_precision_at_5_diff1
value: 3.883249981522982
- type: nauc_precision_at_5_max
value: 34.19785174053362
- type: nauc_precision_at_5_std
value: 25.391096548495977
- type: nauc_recall_at_1000_diff1
value: -10.977265624215267
- type: nauc_recall_at_1000_max
value: 22.349720150932985
- type: nauc_recall_at_1000_std
value: 47.14118127199015
- type: nauc_recall_at_100_diff1
value: -10.566105105889243
- type: nauc_recall_at_100_max
value: 13.59897332326766
- type: nauc_recall_at_100_std
value: 25.1260269383207
- type: nauc_recall_at_10_diff1
value: 3.9418824014124514
- type: nauc_recall_at_10_max
value: 18.87305117920693
- type: nauc_recall_at_10_std
value: 4.227456274746917
- type: nauc_recall_at_1_diff1
value: 10.469485578180873
- type: nauc_recall_at_1_max
value: 4.582098501050435
- type: nauc_recall_at_1_std
value: -10.47482550446343
- type: nauc_recall_at_20_diff1
value: -3.663384950691917
- type: nauc_recall_at_20_max
value: 20.838703493064635
- type: nauc_recall_at_20_std
value: 10.729793670370862
- type: nauc_recall_at_3_diff1
value: -1.1850402683856456
- type: nauc_recall_at_3_max
value: 16.033671610288522
- type: nauc_recall_at_3_std
value: -6.953520529126048
- type: nauc_recall_at_5_diff1
value: -0.5156927662191768
- type: nauc_recall_at_5_max
value: 15.556954479927315
- type: nauc_recall_at_5_std
value: -2.965229848389009
- type: ndcg_at_1
value: 56.00000000000001
- type: ndcg_at_10
value: 45.747
- type: ndcg_at_100
value: 32.761
- type: ndcg_at_1000
value: 29.633
- type: ndcg_at_20
value: 42.905
- type: ndcg_at_3
value: 50.641999999999996
- type: ndcg_at_5
value: 48.231
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 47.8
- type: precision_at_100
value: 33.72
- type: precision_at_1000
value: 14.238000000000001
- type: precision_at_20
value: 45.2
- type: precision_at_3
value: 54.0
- type: precision_at_5
value: 50.8
- type: recall_at_1
value: 0.148
- type: recall_at_10
value: 1.143
- type: recall_at_100
value: 7.219
- type: recall_at_1000
value: 28.294999999999998
- type: recall_at_20
value: 2.083
- type: recall_at_3
value: 0.395
- type: recall_at_5
value: 0.628
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 18.618000000000002
- type: map_at_1
value: 1.22
- type: map_at_10
value: 6.635000000000001
- type: map_at_100
value: 10.873
- type: map_at_1000
value: 12.415
- type: map_at_20
value: 8.334
- type: map_at_3
value: 2.8240000000000003
- type: map_at_5
value: 4.111
- type: mrr_at_1
value: 14.285714285714285
- type: mrr_at_10
value: 31.959831551668284
- type: mrr_at_100
value: 33.15059576942869
- type: mrr_at_1000
value: 33.15059576942869
- type: mrr_at_20
value: 32.685999641281754
- type: mrr_at_3
value: 25.850340136054424
- type: mrr_at_5
value: 29.31972789115646
- type: nauc_map_at_1000_diff1
value: 8.820920087157313
- type: nauc_map_at_1000_max
value: -33.58280072902863
- type: nauc_map_at_1000_std
value: -22.730292551065183
- type: nauc_map_at_100_diff1
value: 9.741008911531535
- type: nauc_map_at_100_max
value: -33.6532837418042
- type: nauc_map_at_100_std
value: -28.3444309192652
- type: nauc_map_at_10_diff1
value: 7.657150877271815
- type: nauc_map_at_10_max
value: -41.7412362957407
- type: nauc_map_at_10_std
value: -35.66062824513052
- type: nauc_map_at_1_diff1
value: 7.593190069621649
- type: nauc_map_at_1_max
value: -39.58442010649443
- type: nauc_map_at_1_std
value: -22.564719811889777
- type: nauc_map_at_20_diff1
value: 7.245303325270055
- type: nauc_map_at_20_max
value: -37.804327180430946
- type: nauc_map_at_20_std
value: -32.702756826489846
- type: nauc_map_at_3_diff1
value: 6.742365189818029
- type: nauc_map_at_3_max
value: -41.7228290771728
- type: nauc_map_at_3_std
value: -30.230168338925107
- type: nauc_map_at_5_diff1
value: 11.935913888588882
- type: nauc_map_at_5_max
value: -41.39335754887243
- type: nauc_map_at_5_std
value: -33.780157609546535
- type: nauc_mrr_at_1000_diff1
value: -1.6708159098532442
- type: nauc_mrr_at_1000_max
value: -36.55890935351506
- type: nauc_mrr_at_1000_std
value: -24.27343264470873
- type: nauc_mrr_at_100_diff1
value: -1.6708159098532442
- type: nauc_mrr_at_100_max
value: -36.55890935351506
- type: nauc_mrr_at_100_std
value: -24.27343264470873
- type: nauc_mrr_at_10_diff1
value: -0.42650070974468685
- type: nauc_mrr_at_10_max
value: -37.09244916127389
- type: nauc_mrr_at_10_std
value: -24.66093983608399
- type: nauc_mrr_at_1_diff1
value: -5.630573652147252
- type: nauc_mrr_at_1_max
value: -33.616658797870684
- type: nauc_mrr_at_1_std
value: -23.601564115907
- type: nauc_mrr_at_20_diff1
value: -1.832519847770416
- type: nauc_mrr_at_20_max
value: -37.12461848720876
- type: nauc_mrr_at_20_std
value: -24.697864546344437
- type: nauc_mrr_at_3_diff1
value: -0.005683436651441496
- type: nauc_mrr_at_3_max
value: -32.50516010446863
- type: nauc_mrr_at_3_std
value: -21.544877233050823
- type: nauc_mrr_at_5_diff1
value: -2.354001730958692
- type: nauc_mrr_at_5_max
value: -32.51899298268129
- type: nauc_mrr_at_5_std
value: -23.68035252143919
- type: nauc_ndcg_at_1000_diff1
value: 14.007950932108976
- type: nauc_ndcg_at_1000_max
value: -31.274257790464837
- type: nauc_ndcg_at_1000_std
value: 3.658749568249879
- type: nauc_ndcg_at_100_diff1
value: 13.626007116136158
- type: nauc_ndcg_at_100_max
value: -35.59107319590088
- type: nauc_ndcg_at_100_std
value: -18.874707006492024
- type: nauc_ndcg_at_10_diff1
value: 9.82558048538336
- type: nauc_ndcg_at_10_max
value: -39.51461465840459
- type: nauc_ndcg_at_10_std
value: -30.33405672804229
- type: nauc_ndcg_at_1_diff1
value: -1.598770159246464
- type: nauc_ndcg_at_1_max
value: -31.975857803575675
- type: nauc_ndcg_at_1_std
value: -18.993368614347663
- type: nauc_ndcg_at_20_diff1
value: 11.616460882964375
- type: nauc_ndcg_at_20_max
value: -36.68867443298684
- type: nauc_ndcg_at_20_std
value: -27.831158282067598
- type: nauc_ndcg_at_3_diff1
value: 3.6760483719742556
- type: nauc_ndcg_at_3_max
value: -30.935030030092992
- type: nauc_ndcg_at_3_std
value: -18.717891674270643
- type: nauc_ndcg_at_5_diff1
value: 10.773599917143413
- type: nauc_ndcg_at_5_max
value: -31.08451038101287
- type: nauc_ndcg_at_5_std
value: -25.478457258577336
- type: nauc_precision_at_1000_diff1
value: -6.780225586359699
- type: nauc_precision_at_1000_max
value: 38.71975790762798
- type: nauc_precision_at_1000_std
value: 57.8083677042306
- type: nauc_precision_at_100_diff1
value: 2.959136061872892
- type: nauc_precision_at_100_max
value: -8.27764507575222
- type: nauc_precision_at_100_std
value: 5.742410187313611
- type: nauc_precision_at_10_diff1
value: 9.882789695687109
- type: nauc_precision_at_10_max
value: -31.486245698037102
- type: nauc_precision_at_10_std
value: -29.081919554833874
- type: nauc_precision_at_1_diff1
value: -5.630573652147252
- type: nauc_precision_at_1_max
value: -33.616658797870684
- type: nauc_precision_at_1_std
value: -23.601564115907
- type: nauc_precision_at_20_diff1
value: 5.165999913921455
- type: nauc_precision_at_20_max
value: -19.322665087378923
- type: nauc_precision_at_20_std
value: -19.841805142598865
- type: nauc_precision_at_3_diff1
value: 2.846740832419061
- type: nauc_precision_at_3_max
value: -30.76562032864513
- type: nauc_precision_at_3_std
value: -23.610192672373636
- type: nauc_precision_at_5_diff1
value: 13.83881140180208
- type: nauc_precision_at_5_max
value: -23.40672207825652
- type: nauc_precision_at_5_std
value: -26.803291207458884
- type: nauc_recall_at_1000_diff1
value: 5.989093134294799
- type: nauc_recall_at_1000_max
value: -23.01810906637643
- type: nauc_recall_at_1000_std
value: 51.72967782759332
- type: nauc_recall_at_100_diff1
value: 9.279568158025599
- type: nauc_recall_at_100_max
value: -32.49225165397591
- type: nauc_recall_at_100_std
value: -14.266931753931292
- type: nauc_recall_at_10_diff1
value: 8.789441102892894
- type: nauc_recall_at_10_max
value: -41.575759675933185
- type: nauc_recall_at_10_std
value: -36.066608504981836
- type: nauc_recall_at_1_diff1
value: 7.593190069621649
- type: nauc_recall_at_1_max
value: -39.58442010649443
- type: nauc_recall_at_1_std
value: -22.564719811889777
- type: nauc_recall_at_20_diff1
value: 7.288095720364289
- type: nauc_recall_at_20_max
value: -34.19747470428325
- type: nauc_recall_at_20_std
value: -29.334755464530023
- type: nauc_recall_at_3_diff1
value: 7.541743741210702
- type: nauc_recall_at_3_max
value: -38.357726279072416
- type: nauc_recall_at_3_std
value: -29.877869977138204
- type: nauc_recall_at_5_diff1
value: 11.512545675995455
- type: nauc_recall_at_5_max
value: -37.366204857623586
- type: nauc_recall_at_5_std
value: -33.58926486109219
- type: ndcg_at_1
value: 12.245000000000001
- type: ndcg_at_10
value: 18.618000000000002
- type: ndcg_at_100
value: 28.488000000000003
- type: ndcg_at_1000
value: 41.208
- type: ndcg_at_20
value: 19.536
- type: ndcg_at_3
value: 15.045
- type: ndcg_at_5
value: 16.359
- type: precision_at_1
value: 14.285999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 6.5920000000000005
- type: precision_at_1000
value: 1.471
- type: precision_at_20
value: 15.204
- type: precision_at_3
value: 18.367
- type: precision_at_5
value: 18.776
- type: recall_at_1
value: 1.22
- type: recall_at_10
value: 13.763
- type: recall_at_100
value: 40.107
- type: recall_at_1000
value: 79.06800000000001
- type: recall_at_20
value: 20.049
- type: recall_at_3
value: 4.2540000000000004
- type: recall_at_5
value: 7.142999999999999
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 69.0625
- type: ap
value: 12.429057046174089
- type: ap_weighted
value: 12.429057046174089
- type: f1
value: 52.366056859622454
- type: f1_weighted
value: 75.91632061778698
- type: main_score
value: 69.0625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 55.387662705149964
- type: f1
value: 55.62292803889264
- type: f1_weighted
value: 55.01561915660653
- type: main_score
value: 55.387662705149964
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 33.535908963951435
- type: v_measure
value: 33.535908963951435
- type: v_measure_std
value: 1.8862804680454297
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 81.57000655659535
- type: cosine_accuracy_threshold
value: 76.01186428039885
- type: cosine_ap
value: 57.187252502171674
- type: cosine_f1
value: 54.94480738905159
- type: cosine_f1_threshold
value: 63.27845286960887
- type: cosine_precision
value: 47.93632075471698
- type: cosine_recall
value: 64.35356200527704
- type: dot_accuracy
value: 81.57000655659535
- type: dot_accuracy_threshold
value: 76.01186510638954
- type: dot_ap
value: 57.1872568788409
- type: dot_f1
value: 54.94480738905159
- type: dot_f1_threshold
value: 63.27845437266042
- type: dot_precision
value: 47.93632075471698
- type: dot_recall
value: 64.35356200527704
- type: euclidean_accuracy
value: 81.57000655659535
- type: euclidean_accuracy_threshold
value: 69.2649048666448
- type: euclidean_ap
value: 57.18724194735979
- type: euclidean_f1
value: 54.94480738905159
- type: euclidean_f1_threshold
value: 85.69894748780587
- type: euclidean_precision
value: 47.93632075471698
- type: euclidean_recall
value: 64.35356200527704
- type: main_score
value: 57.516050924090266
- type: manhattan_accuracy
value: 81.71902008702389
- type: manhattan_accuracy_threshold
value: 856.8997862166725
- type: manhattan_ap
value: 57.516050924090266
- type: manhattan_f1
value: 55.16339869281046
- type: manhattan_f1_threshold
value: 1035.858379830097
- type: manhattan_precision
value: 50.18378378378379
- type: manhattan_recall
value: 61.24010554089709
- type: max_accuracy
value: 81.71902008702389
- type: max_ap
value: 57.516050924090266
- type: max_f1
value: 55.16339869281046
- type: max_precision
value: 50.18378378378379
- type: max_recall
value: 64.35356200527704
- type: similarity_accuracy
value: 81.57000655659535
- type: similarity_accuracy_threshold
value: 76.01186428039885
- type: similarity_ap
value: 57.187252502171674
- type: similarity_f1
value: 54.94480738905159
- type: similarity_f1_threshold
value: 63.27845286960887
- type: similarity_precision
value: 47.93632075471698
- type: similarity_recall
value: 64.35356200527704
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 87.09977878682035
- type: cosine_accuracy_threshold
value: 63.00089389314832
- type: cosine_ap
value: 81.9487582699938
- type: cosine_f1
value: 74.04089724292375
- type: cosine_f1_threshold
value: 56.35024835869245
- type: cosine_precision
value: 70.7599466704091
- type: cosine_recall
value: 77.64089929165382
- type: dot_accuracy
value: 87.09977878682035
- type: dot_accuracy_threshold
value: 63.00089560728222
- type: dot_ap
value: 81.94879514546079
- type: dot_f1
value: 74.04089724292375
- type: dot_f1_threshold
value: 56.350250341728405
- type: dot_precision
value: 70.7599466704091
- type: dot_recall
value: 77.64089929165382
- type: euclidean_accuracy
value: 87.09977878682035
- type: euclidean_accuracy_threshold
value: 86.02221469735642
- type: euclidean_ap
value: 81.94875892553148
- type: euclidean_f1
value: 74.04089724292375
- type: euclidean_f1_threshold
value: 93.43420484744681
- type: euclidean_precision
value: 70.7599466704091
- type: euclidean_recall
value: 77.64089929165382
- type: main_score
value: 82.13756947863085
- type: manhattan_accuracy
value: 87.19292117825125
- type: manhattan_accuracy_threshold
value: 1076.0586285257887
- type: manhattan_ap
value: 82.13756947863085
- type: manhattan_f1
value: 74.36426623424485
- type: manhattan_f1_threshold
value: 1148.366796662276
- type: manhattan_precision
value: 71.32051463311183
- type: manhattan_recall
value: 77.6793963658762
- type: max_accuracy
value: 87.19292117825125
- type: max_ap
value: 82.13756947863085
- type: max_f1
value: 74.36426623424485
- type: max_precision
value: 71.32051463311183
- type: max_recall
value: 77.6793963658762
- type: similarity_accuracy
value: 87.09977878682035
- type: similarity_accuracy_threshold
value: 63.00089389314832
- type: similarity_ap
value: 81.9487582699938
- type: similarity_f1
value: 74.04089724292375
- type: similarity_f1_threshold
value: 56.35024835869245
- type: similarity_precision
value: 70.7599466704091
- type: similarity_recall
value: 77.64089929165382
---
# potion-base-8M Model Card
<div align="center">
<img width="35%" alt="Model2Vec logo" src="https://raw.githubusercontent.com/MinishLab/model2vec/main/assets/images/logo_v2.png">
</div>
This [Model2Vec](https://github.com/MinishLab/model2vec) model is pre-trained using [Tokenlearn](https://github.com/MinishLab/tokenlearn). It is a distilled version of the [baai/bge-base-en-v1.5](https://huggingface.co/baai/bge-base-en-v1.5) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.
## Installation
Install model2vec using pip:
```bash
pip install model2vec
```
## Usage
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("minishlab/potion-base-8M")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
## How it works
Model2Vec creates a small, static model that outperforms other static embedding models by a large margin on all tasks on [MTEB](https://huggingface.co/spaces/mteb/leaderboard). This model is pre-trained using [Tokenlearn](https://github.com/MinishLab/tokenlearn). It's created using the following steps:
- Distillation: first, a model is distilled from a sentence transformer model using Model2Vec.
- Training data creation: the sentence transformer model is used to create training data by creating mean output embeddings on a large corpus.
- Training: the distilled model is trained on the training data using Tokenlearn.
- Post-training re-regularization: after training, the model is re-regularized by weighting the tokens based on their frequency, applying PCA, and finally applying [SIF weighting](https://openreview.net/pdf?id=SyK00v5xx). A rough sketch of this step is shown below.
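The sketch below illustrates the PCA and SIF-weighting ingredients of that re-regularization step. It is a loose illustration only: the function name, the number of PCA components, the `sif_a` constant, and the exact ordering of the operations are assumptions for illustration, not the actual Tokenlearn implementation.
```python
# Illustrative only: frequency-based re-regularization of a static embedding
# table with PCA and SIF-style weighting (not the real Tokenlearn code).
import numpy as np
from sklearn.decomposition import PCA


def reregularize(token_embeddings: np.ndarray, token_counts: np.ndarray,
                 n_components: int = 256, sif_a: float = 1e-3) -> np.ndarray:
    """token_embeddings: (vocab_size, dim); token_counts: raw corpus counts per token."""
    # Keep the dominant directions of the embedding space.
    reduced = PCA(n_components=n_components).fit_transform(token_embeddings)

    # SIF weighting: down-weight frequent tokens, w(t) = a / (a + p(t)).
    probs = token_counts / token_counts.sum()
    weights = sif_a / (sif_a + probs)

    return reduced * weights[:, None]
```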
The results for this model can be found on the [Model2Vec results page](https://github.com/MinishLab/model2vec/blob/main/results/README.md).
## Additional Resources
- [All Model2Vec models on the hub](https://huggingface.co/models?library=model2vec)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Tokenlearn repo](https://github.com/MinishLab/tokenlearn)
- [Model2Vec Results](https://github.com/MinishLab/model2vec/blob/main/results/README.md)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
author = {Stephan Tulkens and Thomas van Dongen},
title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
year = {2024},
url = {https://github.com/MinishLab/model2vec},
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
flax-community/clip-vision-bert-cc12m-60k | flax-community | fill-mask | [
"transformers",
"jax",
"clip-vision-bert",
"fill-mask",
"arxiv:1908.03557",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646 | 1,626 | 10 | 2 | ---
{}
---
# CLIP-Vision-BERT Multilingual Pre-trained Model
CLIP-Vision-BERT is pre-trained on translated [Conceptual-12M](https://github.com/google-research-datasets/conceptual-12m) image-text pairs using a masked language modeling (MLM) objective. 10M cleaned image-text pairs are translated using the [mBART-50 one-to-many model](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) into 2.5M examples each in English, French, German and Spanish. This model is based on VisualBERT, which was introduced in
[this paper](https://arxiv.org/abs/1908.03557) and first released in
[this repository](https://github.com/uclanlp/visualbert). We trained the CLIP-Vision-BERT model during the community week hosted by Huggingface 🤗 using JAX/Flax.
This checkpoint is pre-trained for 60k steps.
## Model description
CLIP-Vision-BERT is a modified BERT model which takes in visual embeddings from the CLIP vision transformer and concatenates them with BERT's textual embeddings before passing them to the self-attention layers of BERT. This enables deep cross-modal interaction between the two modalities.
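As a rough illustration of that concatenation (assumed shapes and a simplified padding layout; this is not the model's actual code, which also handles position embeddings, attention masks, and other details):
```python
# Illustrative sketch: place text embeddings, padding, then CLIP visual
# embeddings into one sequence before BERT's self-attention layers.
import numpy as np


def combine_embeddings(text_embs: np.ndarray, visual_embs: np.ndarray,
                       max_len: int = 128) -> np.ndarray:
    """text_embs: (n_text, hidden) incl. [CLS]/[SEP]; visual_embs: (n_visual, hidden)."""
    pad_len = max_len - text_embs.shape[0] - visual_embs.shape[0]
    pad = np.zeros((pad_len, text_embs.shape[1]))
    # Matches the layout described under "Preprocessing": text, padding, visual.
    return np.concatenate([text_embs, pad, visual_embs], axis=0)
```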
## Intended uses & limitations❗️
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks such as visuo-linguistic sequence classification or visual question answering. We fine-tuned this model on a multilingual translated version of the visual question answering task - [VQA v2](https://visualqa.org/challenge.html). Since Conceptual-12M is a dataset scraped from the internet, it will involve some biases which will also affect all fine-tuned versions of this model.
### How to use❓
You can use this model directly for masked language modeling. You will need to clone the modeling code from [here](https://github.com/gchhablani/multilingual-vqa). An example of usage is shown below:
```python
>>> from torchvision.io import read_image
>>> import numpy as np
>>> import os
>>> from transformers import CLIPProcessor, BertTokenizerFast
>>> from model.flax_clip_vision_bert.modeling_clip_vision_bert import FlaxCLIPVisionBertForMaskedLM
>>> image_path = os.path.join('images/val2014', os.listdir('images/val2014')[0])
>>> img = read_image(image_path)
>>> clip_processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')
ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
>>> clip_outputs = clip_processor(images=img)
>>> clip_outputs['pixel_values'][0] = clip_outputs['pixel_values'][0].transpose(1,2,0) # Transpose the image as the model expects channel-last images.
>>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-multilingual-uncased')
>>> model = FlaxCLIPVisionBertForMaskedLM.from_pretrained('flax-community/clip-vision-bert-cc12m-60k')
>>> text = "Three teddy [MASK] in a showcase."
>>> tokens = tokenizer([text], return_tensors="np")
>>> pixel_values = np.concatenate([clip_outputs['pixel_values']])
>>> outputs = model(pixel_values=pixel_values, **tokens)
>>> indices = np.where(tokens['input_ids']==tokenizer.mask_token_id)
>>> preds = outputs.logits[indices][0]
>>> sorted_indices = np.argsort(preds)[::-1] # Indices sorted by descending score
/home/crocoder/anaconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:4615: UserWarning: 'kind' argument to argsort is ignored.
warnings.warn("'kind' argument to argsort is ignored.")
>>> top_5_indices = sorted_indices[:5]
>>> top_5_tokens = tokenizer.convert_ids_to_tokens(top_5_indices)
>>> top_5_scores = preds[top_5_indices]
>>> print(dict(zip(top_5_tokens, top_5_scores)))
{'bears': 19.241959, 'bear': 17.700356, 'animals': 14.368396, 'girls': 14.343797, 'dolls': 14.274415}
```
## Training data 🏋🏻♂️
The CLIP-Vision-BERT model was pre-trained on a translated version of the Conceptual-12m dataset in four languages using mBART-50: English, French, German and Spanish, with 2.5M image-text pairs in each.
The dataset captions and image urls can be downloaded from [flax-community/conceptual-12m-mbart-50-translated](https://huggingface.co/datasets/flax-community/conceptual-12m-mbart-50-multilingual).
## Data Cleaning 🧹
Though the original dataset contains 12M image-text pairs, a lot of the URLs are invalid now, and in some cases, images are corrupt or broken. We remove such examples from our data, which leaves us with approximately 10M image-text pairs.
**Splits**
We used 99% of the 10M examples as a train set, and the remaining ~ 100K examples as our validation set.
## Training procedure 👨🏻💻
### Preprocessing
The texts are lowercased and tokenized using WordPiece with a shared vocabulary of approximately 110,000 tokens. The beginning of a new document is marked with `[CLS]` and the end of one by `[SEP]`.
The details of the masking procedure for each sentence are the following (a minimal code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
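The sketch below illustrates the 15% / 80-10-10 scheme above. It is illustrative only: the function name and signature are assumptions, special tokens are not excluded for brevity, and the actual data collator lives in the project's training code.
```python
# Illustrative MLM masking: mask 15% of tokens, then apply the 80/10/10 rule.
import numpy as np


def mask_tokens(input_ids: np.ndarray, mask_token_id: int, vocab_size: int,
                mlm_probability: float = 0.15, seed: int = 0):
    rng = np.random.default_rng(seed)
    labels = input_ids.copy()

    masked = rng.random(input_ids.shape) < mlm_probability  # select ~15% of tokens
    labels[~masked] = -100                                   # common ignore-index: loss only on masked positions

    roll = rng.random(input_ids.shape)
    to_mask = masked & (roll < 0.8)                          # 80% -> [MASK]
    to_random = masked & (roll >= 0.8) & (roll < 0.9)        # 10% -> random token
    # the remaining 10% of masked tokens are left unchanged

    corrupted = input_ids.copy()
    corrupted[to_mask] = mask_token_id
    corrupted[to_random] = rng.integers(0, vocab_size, size=input_ids.shape)[to_random]
    return corrupted, labels
```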
The visual embeddings are taken from the CLIP-Vision model and combined with the textual embeddings inside the BERT embedding layer. The padding is done in the middle. Here is an example of what the embeddings look like:
```
[CLS Emb] [Textual Embs] [SEP Emb] [Pad Embs] [Visual Embs]
```
A total length of 128 tokens, including the visual embeddings, is used. The texts are truncated or padded accordingly.
### Pretraining
This checkpoint was trained on a Google Cloud Engine TPU v3-8 machine (335 GB of RAM, 1000 GB of disk, 96 CPU cores) using **8 v3 TPU cores** for 60k steps, with a per-device batch size of 64 and a max sequence length of 128. The optimizer used is Adafactor with a learning rate of 1e-4, a learning rate warmup for 5,000 steps, and linear decay of the learning rate afterwards.
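For reference, the schedule described above could be expressed with `optax` roughly as follows. This is a hedged sketch based only on the numbers in this card (decaying over the remaining 55k of the 60k steps is an assumption), not the project's actual training script.
```python
# Rough optax sketch: warm up to 1e-4 over 5k steps, then decay linearly to 0;
# Adafactor as the optimizer. Illustrative only.
import optax

total_steps = 60_000
warmup_steps = 5_000

schedule = optax.join_schedules(
    schedules=[
        optax.linear_schedule(0.0, 1e-4, transition_steps=warmup_steps),
        optax.linear_schedule(1e-4, 0.0, transition_steps=total_steps - warmup_steps),
    ],
    boundaries=[warmup_steps],
)
optimizer = optax.adafactor(learning_rate=schedule)
```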
We tracked experiments using TensorBoard. Here is the link to the main dashboard: [CLIP Vision BERT CC12M Pre-training Dashboard](https://huggingface.co/flax-community/multilingual-vqa-pt-ckpts/tensorboard)
#### **Pretraining Results 📊**
The model at this checkpoint reached an **eval accuracy of 67.53%**, with a **train loss of 1.793 and an eval loss of 1.724**.
## Fine-tuning on downstream tasks
We performed fine-tuning on downstream tasks. We used the following dataset for visual question answering:
1. A multilingual version of [Visual Question Answering (VQA) v2](https://visualqa.org/challenge.html) - We translated this dataset into the four languages using `Helsinki-NLP` Marian models. The translated data can be found at [flax-community/multilingual-vqa](https://huggingface.co/datasets/flax-community/multilingual-vqa).
The checkpoints for the model fine-tuned from this pre-trained checkpoint can be found [here](https://huggingface.co/flax-community/multilingual-vqa-pt-60k-ft/tensorboard).
The fine-tuned model achieves an eval accuracy of 49% on our validation dataset.
## Team Members
- Gunjan Chhablani [@gchhablani](https://hf.co/gchhablani)
- Bhavitvya Malik [@bhavitvyamalik](https://hf.co/bhavitvyamalik)
## Acknowledgements
We thank [Nilakshan Kunananthaseelan](https://huggingface.co/knilakshan20) for helping us whenever he could get a chance. We also thank [Abheesht Sharma](https://huggingface.co/abheesht) for helping in the discussions in the initial phases. [Luke Melas](https://github.com/lukemelas) helped us get the CC-12M data on our TPU-VMs and we are very grateful to him.
This project would not be possible without the help of [Patrick](https://huggingface.co/patrickvonplaten) and [Suraj](https://huggingface.co/valhalla) who met with us frequently and helped review our approach and guided us throughout the project.
Huge thanks to Huggingface 🤗 & Google Jax/Flax team for such a wonderful community week and for answering our queries on the Slack channel, and for providing us with the TPU-VMs.
<img src="https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:large">
| [
"QUESTION_ANSWERING"
] | [
"BEAR"
] | Non_BioNLP |
legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000 | legalvn | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:651725",
"loss:SoftmaxLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,733 | 1,733 | 10 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:651725
- loss:SoftmaxLoss
widget:
- source_sentence: Nguyên tắc áp dụng phụ cấp ưu đãi nghề y tế thế nào?
sentences:
- Chu kỳ kiểm định chất lượng giáo dục nghề nghiệp\n...\n2. Trường hợp cơ sở giáo
dục nghề nghiệp có ngành, nghề trọng điểm; chương trình đào tạo ngành, nghề trọng
điểm; cơ sở giáo dục nghề nghiệp và chương trình đào tạo các ngành, nghề phục
vụ yêu cầu công tác quản lý nhà nước phải thực hiện kiểm định chất lượng giáo
dục nghề nghiệp theo quy định tại điểm d khoản 3 Điều 65 của Luật Giáo dục nghề
nghiệp số 74/2014/QH13 ngày 27 tháng 11 năm 2014 nhưng không đạt tiêu chuẩn kiểm
định chất lượng giáo dục nghề nghiệp thì trong thời hạn 03 năm phải thực hiện
kiểm định lại.
- Vệ sinh môi trường, vệ sinh tòa nhà\n1. Trách nhiệm của các đơn vị, cán bộ, công
chức, viên chức, nhân viên và người lao động trong việc giữ gìn vệ sinh tại nơi
làm việc và khu vực công cộng:\na) Hàng ngày tự vệ sinh sàn nhà, bàn ghế, tủ,
các thiết bị được trang cấp và tổng vệ sinh phòng làm việc vào chiều thứ Sáu hàng
tuần;\nb) Có trách nhiệm thu gom rác thải trong phòng chuyển ra thùng rác đặt
tại các hành lang;\nc) Không đổ nước chè, cà phê, ….. xuống sàn nhà, hành lang,
tường nhà và khu vệ sinh;\nd) Nghiêm cấp hút thuốc lá trong phòng làm việc, phòng
họp, cầu thang máy, cầu thang bộ, tầng hầm;\nđ) Không khạc nhổ, bôi bẩn lên tường,
không vứt rác thải, gạt tàn thuốc lá, đầu lọc thuốc lá xuống sàn nhà và các khu
vực công cộng;\ne) Nghiêm cấm hái hoa, bẻ cành, dẫm lên thảm cỏ, nhổ cây trong
khuôn viên cơ quan.\ng) Nghiêm cấm mang chất độc hại vào cơ quan.\n…
- Nguyên tắc áp dụng\n1. Trường hợp công chức, viên chức chuyên môn y tế thuộc đối
tượng được hưởng các mức phụ cấp ưu đãi theo nghề khác nhau thì được hưởng một
mức phụ cấp ưu đãi theo nghề cao nhất.\n2. Công chức, viên chức đã hưởng phụ cấp
ưu đãi theo nghề quy định tại Thông tư liên tịch số 06/2010/TTLT-BYT-BNV-BTC ngày
22/3/2010 của Bộ Y tế, Bộ Nội vụ, Bộ Tài chính hướng dẫn thực hiện Nghị định số
64/2009/NĐ-CP ngày 30/7/2009 của Chính phủ về chính sách đối với cán bộ, viên
chức y tế công tác ở vùng có điều kiện kinh tế - xã hội đặc biệt khó khăn thì
không hưởng phụ cấp ưu đãi theo nghề quy định tại Thông tư liên tịch này.
- source_sentence: Số lượng thành viên Hội đồng khoa học và đào tạo là bao nhiêu?
sentences:
- 'Cấp Giấy chứng nhận chất lượng an toàn kỹ thuật và bảo vệ môi trường trong sản
xuất, lắp ráp ô tô, rơ moóc và sơ mi rơ moóc\n2.1. Trình tự thực hiện:\na) Nộp
hồ sơ TTHC:\n- Cơ sở sản xuất lập hồ sơ kiểm tra xe cơ giới theo quy định và nộp
đến Cục Đăng kiểm Việt Nam.\nb) Giải quyết TTHC:\n- Cục Đăng kiểm Việt Nam tiếp
nhận và kiểm tra thành phần hồ sơ kiểm tra xe cơ giới: nếu hồ sơ không đầy đủ
theo quy định thì hướng dẫn Cơ sở sản xuất hoàn thiện lại; Nếu hồ sơ đầy đủ theo
quy định thì thống nhất về thời gian và địa điểm thực hiện đánh giá điều kiện
kiểm tra chất lượng sản phẩm tại Cơ sở sản xuất;\n- Cục Đăng kiểm Việt Nam tiến
hành kiểm tra nội dung hồ sơ và thực hiện đánh giá điều kiện kiểm tra chất lượng
sản phẩm tại Cơ sở sản xuất theo quy định: Nếu chưa đạt yêu cầu thì thông báo
để Cơ sở sản xuất hoàn thiện lại; Nếu đạt yêu cầu thì cấp Giấy chứng nhận trong
thời hạn 03 ngày làm việc kể từ ngày kết thúc kiểm tra, đánh giá hồ sơ đầy đủ,
hợp lệ theo quy định và có kết quả đánh giá COP đạt yêu cầu;\n- Cơ sở sản xuất
nộp hồ sơ kiểm tra xe cơ giới và nhận kết quả trực tiếp tại trụ sở Cục Đăng kiểm
Việt Nam hoặc qua hệ thống bưu chính hoặc qua hệ thống dịch vụ công trực tuyến
hoặc qua hình thức phù hợp khác.\n...'
- Phiên họp Hội đồng khoa học\n1. Hội đồng khoa học họp định kỳ 06 tháng/01 lần.
Các phiên họp định kỳ phải có ít nhất 2/3 tổng số thành viên của Hội đồng khoa
học tham dự.\n2. Phiên họp đột xuất của Hội đồng khoa học được triệu tập theo
quyết định của Chủ tịch và phải có trên 1/2 số thành viên của Hội đồng khoa học
tham dự.\n3. Viện trưởng VKSND tối cao tham dự phiên họp của Hội đồng khoa học
khi thấy cần thiết.\n4. Tùy thuộc vào nội dung chương trình phiên họp, Chủ tịch
Hội đồng khoa học có thể quyết định mời các nhà khoa học trong và ngoài ngành
KSND tham gia phiên họp.\n5. Nội dung phiên họp, các tài liệu liên quan đến phiên
họp của Hội đồng khoa học phải được thông báo hoặc chuyển cho các Thành viên chậm
nhất là 3 ngày làm việc trước ngày họp, trừ trường hợp đột xuất.\n6. Hội đồng
khoa học thảo luận dân chủ, tập thể, công khai, quyết định theo đa số về những
vấn đề thuộc nội dung phiên họp và những vấn đề do Chủ tịch Hội đồng khoa học
nêu ra hoặc do các Thành viên đề nghị và được Chủ tịch Hội đồng khoa học chấp
thuận.\nChủ tịch Hội đồng khoa học chủ trì thảo luận và kết luận tại phiên họp.
Đối với những vấn đề phức tạp còn nhiều ý kiến khác nhau, Hội đồng khoa học tiến
hành biểu quyết. Những vấn đề được biểu quyết đạt trên 2/3 số phiếu của thành
viên có mặt hoặc trên 50% tổng số thành viên Hội đồng được coi là ý kiến chính
thức của Hội đồng khoa học. Các ý kiến khác được bảo lưu, ghi vào biên bản cuộc
họp.
- Hồ sơ, thủ tục công nhận liệt sĩ\n1. Người khi hy sinh đang thuộc quân đội, công
an quản lý thì Bộ Quốc phòng, Bộ Công an chịu trách nhiệm:\na) Hướng dẫn về quy
trình lập hồ sơ đề nghị công nhận liệt sĩ theo quy định.\nb) Có văn bản đề nghị
kèm hồ sơ gửi Bộ Lao động - Thương binh và Xã hội thẩm định trong thời gian không
quá 50 ngày kể từ ngày cơ quan, đơn vị trực tiếp quản lý người hy sinh xác lập,
hoàn thiện các giấy tờ quy định tại Điều 17 Nghị định này.
- source_sentence: Ban Tài chính Văn phòng Kiểm toán nhà nước thực hiện những chức
năng gì?
sentences:
- 'Tiếp nhận hồ sơ và trả kết quả\n...\n2.2.4. Lao động nam hoặc người chồng của
lao động nữ mang thai hộ nghỉ việc khi vợ sinh con: Bản sao giấy chứng sinh hoặc
bản sao giấy khai sinh hoặc trích lục khai sinh của con; trường hợp sinh con phải
phẫu thuật hoặc sinh con dưới 32 tuần tuổi mà giấy chứng sinh không thể hiện thì
có thêm giấy tờ của cơ sở khám bệnh, chữa bệnh thể hiện việc sinh con phải phẫu
thuật, sinh con dưới 32 tuần tuổi. Trường hợp con chết sau khi sinh mà chưa được
cấp giấy chứng sinh thì thay bằng trích sao hoặc tóm tắt hồ sơ bệnh án hoặc giấy
ra viện của người mẹ hoặc của lao động nữ mang thai hộ thể hiện con chết…'
- Việc tự giám sát chất lượng dịch vụ viễn thông của doanh nghiệp viễn thông\n1.
Các doanh nghiệp viễn thông được Bộ Thông tin và Truyền thông cấp giấy phép kinh
doanh dịch vụ viễn thông phải thường xuyên tự giám sát chất lượng dịch vụ đối
với tất cả các dịch vụ thuộc “Danh mục dịch vụ viễn thông bắt buộc quản lý chất
lượng” mà mình cung cấp.\n2. Trong trường hợp dịch vụ mà mình cung cấp có sự cố
thì doanh nghiệp viễn thông phải thực hiện báo cáo đột xuất như quy định tại Khoản
3 Điều 8 của Thông tư này.
- Cục Quản lý, giám sát bảo hiểm; Cục Quản lý Công sản; Cục Quản lý Giá; Cục Quản
lý Nợ và Tài chính đối ngoại; Cục Quản lý, giám sát Kế toán, Kiểm toán; Cục Quản
lý Công sản; Cục Tài chính doanh nghiệp và Vụ Tài chính ngân hàng chủ trì phối
hợp với Cục Tin học & Thống kê Tài chính xây dựng quy trình điện tử từng thủ tục
hành chính theo phạm vi quản lý đối với danh mục thủ tục hành chính để thực hiện
tích hợp trên Hệ thống thông tin Một cửa điện tử của Bộ Tài chính.
- source_sentence: Điều kiện để Giám đốc Học viện An ninh nhân dân được thăng cấp
bậc hàm trước thời hạn như thế nào?
sentences:
- Mức độ tự chủ và trách nhiệm\n- Có ý thức và tác phong nghề nghiệp đúng chuẩn
mực, có năng lực thực hiện công việc được giao; phương pháp làm việc khoa học,
biết phân tích và giải quyết các vấn đề mới về lĩnh vực chuyên môn nghề;\n- Gắn
bó nghề nghiệp; nghiêm chỉnh chấp hành quy chế, quy định của cơ quan, doanh nghiệp,
nơi đang công tác với ý thức tổ chức kỉ luật và tinh thần trách nhiệm cao trong
công việc;\n- Lập được các biện pháp an toàn và đảm bảo an toàn, vệ sinh lao động
trong quá trình làm việc; có ý thức trách nhiệm công dân, thái độ và đạo đức nghề
nghiệp đúng đắn, sẵn sàng nhận nhiệm vụ; tự tin, cầu tiến trong công việc; hợp
tác, thân thiện, khiêm tốn trong các mối quan hệ;\n- Tự chịu trách nhiệm về chất
lượng đối với kết quả công việc, sản phẩm do mình đảm nhiệm theo các tiêu chuẩn
và chịu một phần trách nhiệm đối với kết quả công việc, sản phẩm của tổ, nhóm;
- Tổ chức bộ máy\n...\n5. Tổng cục Hải quan có thể biệt phái công chức từ các đơn
vị thuộc và trực thuộc Tổng cục để bổ sung cán bộ chủ chốt, cán bộ kỹ thuật có
năng lực, kinh nghiệm cho Ban Quản lý dự án đầu tư xây dựng chuyên ngành của Tổng
cục Hải quan. Thời hạn biệt phái các công chức không quá 03 năm, trường hợp quá
03 năm mà chưa hoàn thành dự án thì Tổng cục Hải quan xem xét quyết định bổ sung
thời gian biệt phái.\nNhân sự tuyển dụng mới của Ban Quản lý dự án đầu tư xây
dựng chuyên ngành của Tổng cục Hải quan là viên chức hoặc hợp đồng lao động, thực
hiện theo quy định về chế độ tiền lương và các chế độ, chính sách đối với viên
chức và người lao động.\n...
- Biệt phái công chức\n...\n6. Không thực hiện biệt phái công chức nữ đang mang
thai hoặc nuôi con dưới 36 tháng tuổi.
- source_sentence: Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức,
viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?
sentences:
- Nhiệm vụ của giáo viên\n1. Thực hiện nhiệm vụ tổ chức các hoạt động dạy học, giáo
dục theo kế hoạch giáo dục của nhà trường và kế hoạch giáo dục của tổ chuyên môn;
quản lý học sinh trong các hoạt động giáo dục do nhà trường tổ chức; tham gia
các hoạt động chuyên môn; chịu trách nhiệm về chất lượng, hiệu quả giáo dục.\n2.
Trau dồi đạo đức, nêu cao tinh thần trách nhiệm, giữ gìn phẩm chất, danh dự, uy
tín của nhà giáo; gương mẫu trước học sinh; thương yêu, đối xử công bằng và tôn
trọng nhân cách của học sinh; bảo vệ các quyền và lợi ích chính đáng của học sinh;
đoàn kết, giúp đỡ đồng nghiệp.\n3. Học tập, rèn luyện để nâng cao sức khỏe, trình
độ chính trị, chuyên môn, nghiệp vụ, đổi mới phương pháp dạy học, giáo dục.\n4.
Tham gia tập huấn, bồi dưỡng chuyên môn, nghiệp vụ.\n5. Tham gia công tác phổ
cập giáo dục trung học cơ sở ở địa phương.\n6. Thực hiện nghĩa vụ công dân, các
quy định của pháp luật và của ngành Giáo dục, các quyết định của hiệu trưởng;
thực hiện nhiệm vụ do hiệu trưởng phân công, chịu sự kiểm tra, đánh giá của hiệu
trưởng và các cấp quản lý giáo dục.\n7. Phối hợp với Đội Thiếu niên Tiền phong
Hồ Chí Minh, Đoàn Thanh niên Cộng sản Hồ Chí Minh, Hội Liên hiệp Thanh niên Việt
Nam, gia đình học sinh và các tổ chức xã hội liên quan để tổ chức hoạt động giáo
dục.\n8. Thực hiện các nhiệm vụ khác theo quy định của pháp luật.
- “Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong
trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP
ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \n1. Máy
PCR. \n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \n3. Test kít
xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \n4. Máy thở chức năng
cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng
cao, máy thở xách tay. \n5. Máy lọc máu liên tục. \n6. Máy X-Quang di động. \n7.
Máy đo khí máu (đo được điện giải, lactat, hematocrite). \n8. Máy theo dõi bệnh
nhân>5 thông số. \n9. Bơm tiêm điện; Bơm truyền dịch. \n10. Máy phá rung tim có
tạo nhịp. \n11. Máy đo thời gian đông máu. \n12. Máy đo huyết động.”
- Thời điểm đánh giá xếp loại chất lượng hằng năm\n...\n2. Căn cứ tình hình thực
tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống
nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất
lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo
đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\n3. Tại thời điểm đánh giá,
xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ
chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm
làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ
được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá,
xếp loại chất lượng theo quy định của pháp luật và Quy chế này.
---
# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision 8d6b950845285729817bf8e1af1861502c2fed0c -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
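The architecture above is a BERT encoder followed by attention-masked mean pooling, truncated at 128 tokens. As a rough equivalent outside the Sentence Transformers wrapper, the sketch below reproduces that pooling with plain 🤗 Transformers. It is an illustration only: it assumes the repository keeps the transformer weights and tokenizer at its root, as saved `SentenceTransformer` checkpoints normally do, and the input string is a placeholder.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed repo id (the same one used in the usage example below).
model_id = "legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average only over non-padding tokens, mirroring pooling_mode_mean_tokens=True above.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(["Ví dụ câu truy vấn"], padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**batch)
embedding = mean_pool(outputs.last_hidden_state, batch["attention_mask"])
print(embedding.shape)  # torch.Size([1, 384])
```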
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000")
# Run inference
sentences = [
'Thời điểm đánh giá và xếp loại chất lượng hằng năm của công chức, viên chức thuộc Bộ Tài chính được diễn ra trong thời gian nào?',
'Thời điểm đánh giá xếp loại chất lượng hằng năm\\n...\\n2. Căn cứ tình hình thực tiễn của cơ quan, tổ chức, đơn vị, tập thể lãnh đạo cơ quan, tổ chức, đơn vị thống nhất với cấp ủy cùng cấp về việc kết hợp tổ chức cuộc họp đánh giá, xếp loại chất lượng công chức, viên chức và xếp loại đảng viên trong tổ chức, đơn vị mình, bảo đảm nghiêm túc, hiệu quả, tránh hình thức, lãng phí.\\n3. Tại thời điểm đánh giá, xếp loại chất lượng, trường hợp vắng mặt có lý do chính đáng hoặc nghỉ ốm, nghỉ chế độ thai sản theo quy định của pháp luật, công chức, viên chức có trách nhiệm làm báo cáo tại Phiếu đánh giá, xếp loại chất lượng theo chức trách, nhiệm vụ được giao, gửi cơ quan, tổ chức, đơn vị đang công tác để thực hiện việc đánh giá, xếp loại chất lượng theo quy định của pháp luật và Quy chế này.',
'“Điều 1. Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19 trong trường hợp cấp bách theo quy định tại khoản 3 Điều 29 Nghị định số 98/2021/NĐ-CP ngày 08 tháng 11 năm 2021 của Chính phủ về quản lý trang thiết bị y tế \\n1. Máy PCR. \\n2. Hóa chất (sinh phẩm) chạy máy PCR xét nghiệm SARS-CoV-2. \\n3. Test kít xét nghiệm nhanh kháng nguyên/ kháng thể kháng SARS-CoV-2. \\n4. Máy thở chức năng cao, máy thở xâm nhập và không xâm nhập, máy thở không xâm nhập, máy oxy dòng cao, máy thở xách tay. \\n5. Máy lọc máu liên tục. \\n6. Máy X-Quang di động. \\n7. Máy đo khí máu (đo được điện giải, lactat, hematocrite). \\n8. Máy theo dõi bệnh nhân>5 thông số. \\n9. Bơm tiêm điện; Bơm truyền dịch. \\n10. Máy phá rung tim có tạo nhịp. \\n11. Máy đo thời gian đông máu. \\n12. Máy đo huyết động.”',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
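Beyond pairwise similarity, the same embeddings can back a small semantic-search loop. The snippet below is only a sketch: the query and corpus strings are illustrative placeholders, and it relies on the `util.semantic_search` helper shipped with Sentence Transformers.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("legalvn/paraphrase-multilingual-MiniLM-L12-v2-166000")

query = "Thời điểm đánh giá, xếp loại chất lượng công chức hằng năm"  # placeholder query
corpus = [  # placeholder documents
    "Quy định về đánh giá, xếp loại chất lượng công chức, viên chức hằng năm.",
    "Danh mục trang thiết bị y tế phục vụ phòng, chống dịch COVID-19.",
]

query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Rank the corpus by cosine similarity to the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 4), corpus[hit["corpus_id"]])
```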
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 651,725 training samples
* Columns: <code>queries</code>, <code>corpus</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | queries | corpus | score |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 9 tokens</li><li>mean: 24.71 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 121.6 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>0: ~43.80%</li><li>1: ~37.00%</li><li>2: ~19.20%</li></ul> |
* Samples:
| queries | corpus | score |
|:------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Người học ngành quản lý khai thác công trình thủy lợi trình độ cao đẳng phải có khả năng học tập và nâng cao trình độ như thế nào?</code> | <code>Khả năng học tập, nâng cao trình độ\n- Khối lượng khối lượng kiến thức tối thiểu, yêu cầu về năng lực mà người học phải đạt được sau khi tốt nghiệp ngành, nghề Dược trình độ cao đẳng có thể tiếp tục phát triển ở các trình độ cao hơn;\n- Người học sau tốt nghiệp có năng lực tự học, tự cập nhật những tiến bộ khoa học công nghệ trong phạm vi ngành, nghề để nâng cao trình độ hoặc học liên thông lên trình độ cao hơn trong cùng ngành nghề hoặc trong nhóm ngành, nghề hoặc trong cùng lĩnh vực đào tạo.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nTrong phạm vi điều chỉnh của văn bản quy phạm pháp luật:\n1. Xác định nội dung liên quan đến vấn đề bình đẳng giới hoặc vấn đề bất bình đẳng giới, phân biệt đối xử về giới.\n2. Quy định các biện pháp cần thiết để thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới; dự báo tác động của các quy định đó đối với nam và nữ sau khi được ban hành.\n3. Xác định nguồn nhân lực, tài chính cần thiết để triển khai các biện pháp thực hiện bình đẳng giới hoặc để giải quyết vấn đề bất bình đẳng giới, phân biệt đối xử về giới.</code> | <code>2</code> |
| <code>Nội dung lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật được quy định thế nào?</code> | <code>Mục đích lồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật\nLồng ghép vấn đề bình đẳng giới trong xây dựng văn bản quy phạm pháp luật (sau đây gọi tắt là văn bản) là một biện pháp để thực hiện mục tiêu bình đẳng giới, xóa bỏ phân biệt đối xử về giới, bảo đảm quyền, lợi ích hợp pháp, phù hợp với đặc thù của mỗi giới; tạo cơ hội phát triển như nhau cho nam và nữ trong các lĩnh vực của đời sống xã hội và gia đình; bảo đảm bình đẳng giới thực chất giữa nam và nữ.</code> | <code>1</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
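For reference, the sketch below shows how a comparable run could be set up with the Sentence Transformers v3 trainer: a three-class `SoftmaxLoss` over (`queries`, `corpus`, `score`) rows, with the batch size, learning rate, and epoch count mirroring the hyperparameter list in the next section. The dataset rows here are placeholders, not samples from the actual 651,725-example training set.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import SoftmaxLoss

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Placeholder rows; the trainer treats the "score" column as the label.
train_dataset = Dataset.from_dict({
    "queries": ["Câu hỏi pháp lý ví dụ?"],
    "corpus": ["Đoạn trích văn bản quy phạm pháp luật ví dụ."],
    "score": [2],
})

loss = SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=3,
)

args = SentenceTransformerTrainingArguments(
    output_dir="paraphrase-multilingual-MiniLM-L12-v2-legal",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```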
### Training Hyperparameters
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3.0
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:------:|:-------------:|
| 0.0061 | 500 | 1.0473 |
| 0.0123 | 1000 | 1.0447 |
| 0.0184 | 1500 | 1.0383 |
| 0.0246 | 2000 | 1.0395 |
| 0.0307 | 2500 | 1.0436 |
| 0.0368 | 3000 | 1.0375 |
| 0.0430 | 3500 | 1.0189 |
| 0.0491 | 4000 | 1.0282 |
| 0.0552 | 4500 | 1.0355 |
| 0.0614 | 5000 | 1.0286 |
| 0.0675 | 5500 | 1.0264 |
| 0.0737 | 6000 | 1.0174 |
| 0.0798 | 6500 | 1.0238 |
| 0.0859 | 7000 | 1.0217 |
| 0.0921 | 7500 | 1.0203 |
| 0.0982 | 8000 | 1.0201 |
| 0.1043 | 8500 | 1.0266 |
| 0.1105 | 9000 | 1.0379 |
| 0.1166 | 9500 | 1.0367 |
| 0.1228 | 10000 | 1.0384 |
| 0.1289 | 10500 | 1.0291 |
| 0.1350 | 11000 | 1.0362 |
| 0.1412 | 11500 | 1.0354 |
| 0.1473 | 12000 | 1.0204 |
| 0.1534 | 12500 | 1.0401 |
| 0.1596 | 13000 | 1.0237 |
| 0.1657 | 13500 | 1.0271 |
| 0.1719 | 14000 | 1.0235 |
| 0.1780 | 14500 | 1.0329 |
| 0.1841 | 15000 | 1.0474 |
| 0.1903 | 15500 | 1.0547 |
| 0.1964 | 16000 | 1.0557 |
| 0.2025 | 16500 | 1.0626 |
| 0.2087 | 17000 | 1.0551 |
| 0.2148 | 17500 | 1.0526 |
| 0.2210 | 18000 | 1.125 |
| 0.2271 | 18500 | 1.2996 |
| 0.2332 | 19000 | 1.0703 |
| 0.2394 | 19500 | 1.0601 |
| 0.2455 | 20000 | 1.0835 |
| 0.2516 | 20500 | 1.0583 |
| 0.2578 | 21000 | 1.141 |
| 0.2639 | 21500 | 1.0802 |
| 0.2701 | 22000 | 1.0589 |
| 0.2762 | 22500 | 1.086 |
| 0.2823 | 23000 | 1.0743 |
| 0.2885 | 23500 | 1.0605 |
| 0.2946 | 24000 | 1.0602 |
| 0.3007 | 24500 | 1.0732 |
| 0.3069 | 25000 | 1.0614 |
| 0.3130 | 25500 | 1.0666 |
| 0.3192 | 26000 | 1.0669 |
| 0.3253 | 26500 | 1.0627 |
| 0.3314 | 27000 | 1.0659 |
| 0.3376 | 27500 | 1.07 |
| 0.3437 | 28000 | 1.0783 |
| 0.3498 | 28500 | 1.078 |
| 0.3560 | 29000 | 1.0832 |
| 0.3621 | 29500 | 1.0695 |
| 0.3683 | 30000 | 1.0714 |
| 0.3744 | 30500 | 1.3794 |
| 0.3805 | 31000 | 1.0838 |
| 0.3867 | 31500 | 1.0541 |
| 0.3928 | 32000 | 1.0799 |
| 0.3989 | 32500 | 1.0622 |
| 0.4051 | 33000 | 1.0597 |
| 0.4112 | 33500 | 1.0731 |
| 0.4174 | 34000 | 1.0871 |
| 0.4235 | 34500 | 1.0535 |
| 0.4296 | 35000 | 1.3215 |
| 0.4358 | 35500 | 1.1501 |
| 0.4419 | 36000 | 1.1088 |
| 0.4480 | 36500 | 1.0844 |
| 0.4542 | 37000 | 1.0981 |
| 0.4603 | 37500 | 1.0856 |
| 0.4665 | 38000 | 1.0956 |
| 0.4726 | 38500 | 1.0813 |
| 0.4787 | 39000 | 1.0843 |
| 0.4849 | 39500 | 1.1053 |
| 0.4910 | 40000 | 1.092 |
| 0.4971 | 40500 | 1.081 |
| 0.5033 | 41000 | 1.0919 |
| 0.5094 | 41500 | 1.0681 |
| 0.5156 | 42000 | 1.0826 |
| 0.5217 | 42500 | 1.0809 |
| 0.5278 | 43000 | 1.093 |
| 0.5340 | 43500 | 1.0709 |
| 0.5401 | 44000 | 1.0623 |
| 0.5462 | 44500 | 1.0801 |
| 0.5524 | 45000 | 1.0833 |
| 0.5585 | 45500 | 1.0816 |
| 0.5647 | 46000 | 1.0697 |
| 0.5708 | 46500 | 1.0864 |
| 0.5769 | 47000 | 1.0744 |
| 0.5831 | 47500 | 1.0897 |
| 0.5892 | 48000 | 1.0727 |
| 0.5953 | 48500 | 1.0621 |
| 0.6015 | 49000 | 1.0582 |
| 0.6076 | 49500 | 1.0681 |
| 0.6138 | 50000 | 1.083 |
| 0.6199 | 50500 | 1.0632 |
| 0.6260 | 51000 | 1.0809 |
| 0.6322 | 51500 | 1.0525 |
| 0.6383 | 52000 | 1.6649 |
| 0.6444 | 52500 | 1.0873 |
| 0.6506 | 53000 | 1.0649 |
| 0.6567 | 53500 | 1.0591 |
| 0.6629 | 54000 | 1.061 |
| 0.6690 | 54500 | 1.0682 |
| 0.6751 | 55000 | 1.0616 |
| 0.6813 | 55500 | 1.0827 |
| 0.6874 | 56000 | 1.0799 |
| 0.6935 | 56500 | 1.0705 |
| 0.6997 | 57000 | 1.0821 |
| 0.7058 | 57500 | 1.0763 |
| 0.7120 | 58000 | 1.0842 |
| 0.7181 | 58500 | 1.0813 |
| 0.7242 | 59000 | 1.0678 |
| 0.7304 | 59500 | 1.0894 |
| 0.7365 | 60000 | 1.0733 |
| 0.7426 | 60500 | 1.0688 |
| 0.7488 | 61000 | 1.0665 |
| 0.7549 | 61500 | 1.0681 |
| 0.7611 | 62000 | 1.301 |
| 0.7672 | 62500 | 1.0907 |
| 0.7733 | 63000 | 1.3941 |
| 0.7795 | 63500 | 1.1355 |
| 0.7856 | 64000 | 1.2196 |
| 0.7917 | 64500 | 1.225 |
| 0.7979 | 65000 | 1.1437 |
| 0.8040 | 65500 | 1.0787 |
| 0.8102 | 66000 | 1.0686 |
| 0.8163 | 66500 | 1.1017 |
| 0.8224 | 67000 | 1.0999 |
| 0.8286 | 67500 | 1.0771 |
| 0.8347 | 68000 | 1.1015 |
| 0.8408 | 68500 | 1.0826 |
| 0.8470 | 69000 | 1.1046 |
| 0.8531 | 69500 | 1.0735 |
| 0.8593 | 70000 | 1.1056 |
| 0.8654 | 70500 | 1.1077 |
| 0.8715 | 71000 | 1.0897 |
| 0.8777 | 71500 | 1.0775 |
| 0.8838 | 72000 | 1.0907 |
| 0.8899 | 72500 | 1.0705 |
| 0.8961 | 73000 | 1.0776 |
| 0.9022 | 73500 | 1.0896 |
| 0.9084 | 74000 | 1.0889 |
| 0.9145 | 74500 | 1.0804 |
| 0.9206 | 75000 | 1.1087 |
| 0.9268 | 75500 | 1.0738 |
| 0.9329 | 76000 | 1.0806 |
| 0.9390 | 76500 | 1.0899 |
| 0.9452 | 77000 | 1.0814 |
| 0.9513 | 77500 | 1.0723 |
| 0.9575 | 78000 | 1.0923 |
| 0.9636 | 78500 | 1.0748 |
| 0.9697 | 79000 | 1.0745 |
| 0.9759 | 79500 | 1.081 |
| 0.9820 | 80000 | 1.08 |
| 0.9881 | 80500 | 1.0905 |
| 0.9943 | 81000 | 1.1064 |
| 1.0004 | 81500 | 1.0929 |
| 1.0066 | 82000 | 1.0815 |
| 1.0127 | 82500 | 1.0768 |
| 1.0188 | 83000 | 1.1004 |
| 1.0250 | 83500 | 1.0835 |
| 1.0311 | 84000 | 1.0765 |
| 1.0372 | 84500 | 1.0906 |
| 1.0434 | 85000 | 1.096 |
| 1.0495 | 85500 | 1.1085 |
| 1.0557 | 86000 | 1.0913 |
| 1.0618 | 86500 | 1.0974 |
| 1.0679 | 87000 | 1.0763 |
| 1.0741 | 87500 | 1.0894 |
| 1.0802 | 88000 | 1.1065 |
| 1.0863 | 88500 | 1.0898 |
| 1.0925 | 89000 | 1.1036 |
| 1.0986 | 89500 | 1.0825 |
| 1.1048 | 90000 | 1.1164 |
| 1.1109 | 90500 | 1.0811 |
| 1.1170 | 91000 | 1.115 |
| 1.1232 | 91500 | 1.1123 |
| 1.1293 | 92000 | 1.0846 |
| 1.1354 | 92500 | 1.0917 |
| 1.1416 | 93000 | 1.0879 |
| 1.1477 | 93500 | 1.0969 |
| 1.1539 | 94000 | 1.0849 |
| 1.1600 | 94500 | 1.0852 |
| 1.1661 | 95000 | 1.0774 |
| 1.1723 | 95500 | 1.0984 |
| 1.1784 | 96000 | 1.0936 |
| 1.1845 | 96500 | 1.0842 |
| 1.1907 | 97000 | 1.0895 |
| 1.1968 | 97500 | 1.09 |
| 1.2030 | 98000 | 1.0813 |
| 1.2091 | 98500 | 1.0965 |
| 1.2152 | 99000 | 1.1017 |
| 1.2214 | 99500 | 1.1045 |
| 1.2275 | 100000 | 1.093 |
| 1.2336 | 100500 | 1.0903 |
| 1.2398 | 101000 | 1.1133 |
| 1.2459 | 101500 | 1.0883 |
| 1.2521 | 102000 | 1.1192 |
| 1.2582 | 102500 | 1.0817 |
| 1.2643 | 103000 | 1.0822 |
| 1.2705 | 103500 | 1.0915 |
| 1.2766 | 104000 | 1.1128 |
| 1.2827 | 104500 | 1.0786 |
| 1.2889 | 105000 | 1.1101 |
| 1.2950 | 105500 | 1.097 |
| 1.3012 | 106000 | 1.095 |
| 1.3073 | 106500 | 1.0884 |
| 1.3134 | 107000 | 1.09 |
| 1.3196 | 107500 | 1.1057 |
| 1.3257 | 108000 | 1.087 |
| 1.3318 | 108500 | 1.1009 |
| 1.3380 | 109000 | 1.0849 |
| 1.3441 | 109500 | 1.0886 |
| 1.3503 | 110000 | 1.0805 |
| 1.3564 | 110500 | 1.0808 |
| 1.3625 | 111000 | 1.1025 |
| 1.3687 | 111500 | 1.0955 |
| 1.3748 | 112000 | 1.0824 |
| 1.3809 | 112500 | 1.0835 |
| 1.3871 | 113000 | 1.1168 |
| 1.3932 | 113500 | 1.0881 |
| 1.3994 | 114000 | 1.0946 |
| 1.4055 | 114500 | 1.0819 |
| 1.4116 | 115000 | 1.1155 |
| 1.4178 | 115500 | 1.1021 |
| 1.4239 | 116000 | 1.102 |
| 1.4300 | 116500 | 1.0733 |
| 1.4362 | 117000 | 1.0987 |
| 1.4423 | 117500 | 1.1103 |
| 1.4485 | 118000 | 1.1034 |
| 1.4546 | 118500 | 1.0987 |
| 1.4607 | 119000 | 1.0908 |
| 1.4669 | 119500 | 1.0986 |
| 1.4730 | 120000 | 1.0988 |
| 1.4791 | 120500 | 1.1023 |
| 1.4853 | 121000 | 1.1013 |
| 1.4914 | 121500 | 1.0896 |
| 1.4976 | 122000 | 1.8455 |
| 1.5037 | 122500 | 1.1155 |
| 1.5098 | 123000 | 1.1502 |
| 1.5160 | 123500 | 1.1183 |
| 1.5221 | 124000 | 1.0958 |
| 1.5282 | 124500 | 1.1098 |
| 1.5344 | 125000 | 1.1021 |
| 1.5405 | 125500 | 1.0912 |
| 1.5467 | 126000 | 1.0961 |
| 1.5528 | 126500 | 1.0858 |
| 1.5589 | 127000 | 1.0784 |
| 1.5651 | 127500 | 1.1112 |
| 1.5712 | 128000 | 1.1067 |
| 1.5773 | 128500 | 1.0986 |
| 1.5835 | 129000 | 1.0824 |
| 1.5896 | 129500 | 1.1072 |
| 1.5958 | 130000 | 1.1098 |
| 1.6019 | 130500 | 1.0962 |
| 1.6080 | 131000 | 1.1108 |
| 1.6142 | 131500 | 1.1187 |
| 1.6203 | 132000 | 1.0923 |
| 1.6264 | 132500 | 1.1003 |
| 1.6326 | 133000 | 1.0865 |
| 1.6387 | 133500 | 1.099 |
| 1.6449 | 134000 | 1.0838 |
| 1.6510 | 134500 | 1.0792 |
| 1.6571 | 135000 | 1.0966 |
| 1.6633 | 135500 | 1.0782 |
| 1.6694 | 136000 | 1.1123 |
| 1.6755 | 136500 | 1.0923 |
| 1.6817 | 137000 | 1.0873 |
| 1.6878 | 137500 | 1.0807 |
| 1.6940 | 138000 | 1.083 |
| 1.7001 | 138500 | 1.0864 |
| 1.7062 | 139000 | 1.0828 |
| 1.7124 | 139500 | 1.0973 |
| 1.7185 | 140000 | 1.1022 |
| 1.7246 | 140500 | 1.0837 |
| 1.7308 | 141000 | 1.0985 |
| 1.7369 | 141500 | 1.1049 |
| 1.7431 | 142000 | 1.079 |
| 1.7492 | 142500 | 1.0757 |
| 1.7553 | 143000 | 1.0808 |
| 1.7615 | 143500 | 1.0743 |
| 1.7676 | 144000 | 1.0933 |
| 1.7737 | 144500 | 1.0938 |
| 1.7799 | 145000 | 1.1121 |
| 1.7860 | 145500 | 1.1138 |
| 1.7922 | 146000 | 1.1063 |
| 1.7983 | 146500 | 1.097 |
| 1.8044 | 147000 | 1.0999 |
| 1.8106 | 147500 | 1.1035 |
| 1.8167 | 148000 | 1.0786 |
| 1.8228 | 148500 | 1.0824 |
| 1.8290 | 149000 | 1.1097 |
| 1.8351 | 149500 | 1.0744 |
| 1.8413 | 150000 | 1.0902 |
| 1.8474 | 150500 | 1.0841 |
| 1.8535 | 151000 | 1.0961 |
| 1.8597 | 151500 | 1.0778 |
| 1.8658 | 152000 | 1.0784 |
| 1.8719 | 152500 | 1.0741 |
| 1.8781 | 153000 | 1.0879 |
| 1.8842 | 153500 | 1.079 |
| 1.8904 | 154000 | 1.0967 |
| 1.8965 | 154500 | 1.0906 |
| 1.9026 | 155000 | 1.0836 |
| 1.9088 | 155500 | 1.0932 |
| 1.9149 | 156000 | 1.0823 |
| 1.9210 | 156500 | 1.087 |
| 1.9272 | 157000 | 1.0892 |
| 1.9333 | 157500 | 1.0842 |
| 1.9395 | 158000 | 1.0837 |
| 1.9456 | 158500 | 1.1001 |
| 1.9517 | 159000 | 1.0727 |
| 1.9579 | 159500 | 1.0875 |
| 1.9640 | 160000 | 1.0845 |
| 1.9701 | 160500 | 1.0805 |
| 1.9763 | 161000 | 1.0825 |
| 1.9824 | 161500 | 1.0886 |
| 1.9886 | 162000 | 1.0856 |
| 1.9947 | 162500 | 1.0816 |
| 2.0008 | 163000 | 1.1005 |
| 2.0070 | 163500 | 1.0775 |
| 2.0131 | 164000 | 1.0875 |
| 2.0192 | 164500 | 1.09 |
| 2.0254 | 165000 | 1.086 |
| 2.0315 | 165500 | 1.087 |
| 2.0377 | 166000 | 1.0815 |
</details>
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.3.1
- Transformers: 4.43.0
- PyTorch: 2.5.0+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.19.1
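A quick sanity check that an environment matches the versions above is to print them at runtime (a minimal sketch; expected values are taken from the list):

```python
import sentence_transformers
import torch
import transformers

print(sentence_transformers.__version__)  # expected: 3.3.1
print(transformers.__version__)           # expected: 4.43.0
print(torch.__version__)                  # expected: 2.5.0+cu124
```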
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"PCR"
] | Non_BioNLP |
katanemo/bge-large-en-v1.5 | katanemo | feature-extraction | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,728 | 1,728 | 14 | 0 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-large-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8507462686567
- type: ap
value: 38.566457320228245
- type: f1
value: 69.69386648043475
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.416675
- type: ap
value: 89.1928861155922
- type: f1
value: 92.39477019574215
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.175999999999995
- type: f1
value: 47.80712792870253
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.184999999999995
- type: map_at_10
value: 55.654
- type: map_at_100
value: 56.25
- type: map_at_1000
value: 56.255
- type: map_at_3
value: 51.742999999999995
- type: map_at_5
value: 54.129000000000005
- type: mrr_at_1
value: 40.967
- type: mrr_at_10
value: 55.96
- type: mrr_at_100
value: 56.54900000000001
- type: mrr_at_1000
value: 56.554
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.44
- type: ndcg_at_1
value: 40.184999999999995
- type: ndcg_at_10
value: 63.542
- type: ndcg_at_100
value: 65.96499999999999
- type: ndcg_at_1000
value: 66.08699999999999
- type: ndcg_at_3
value: 55.582
- type: ndcg_at_5
value: 59.855000000000004
- type: precision_at_1
value: 40.184999999999995
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.405
- type: recall_at_1
value: 40.184999999999995
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 98.72
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 77.027
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.567077926750066
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.19453389182364
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.46555939623092
- type: mrr
value: 77.82361605768807
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.9554128814735
- type: cos_sim_spearman
value: 84.65373612172036
- type: euclidean_pearson
value: 83.2905059954138
- type: euclidean_spearman
value: 84.52240782811128
- type: manhattan_pearson
value: 82.99533802997436
- type: manhattan_spearman
value: 84.20673798475734
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.78896103896103
- type: f1
value: 87.77189310964883
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.714538337650495
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.90108349284447
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.795
- type: map_at_10
value: 43.669000000000004
- type: map_at_100
value: 45.151
- type: map_at_1000
value: 45.278
- type: map_at_3
value: 40.006
- type: map_at_5
value: 42.059999999999995
- type: mrr_at_1
value: 39.771
- type: mrr_at_10
value: 49.826
- type: mrr_at_100
value: 50.504000000000005
- type: mrr_at_1000
value: 50.549
- type: mrr_at_3
value: 47.115
- type: mrr_at_5
value: 48.832
- type: ndcg_at_1
value: 39.771
- type: ndcg_at_10
value: 50.217999999999996
- type: ndcg_at_100
value: 55.454
- type: ndcg_at_1000
value: 57.37
- type: ndcg_at_3
value: 44.885000000000005
- type: ndcg_at_5
value: 47.419
- type: precision_at_1
value: 39.771
- type: precision_at_10
value: 9.642000000000001
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 15.536
- type: recall_at_1
value: 32.795
- type: recall_at_10
value: 62.580999999999996
- type: recall_at_100
value: 84.438
- type: recall_at_1000
value: 96.492
- type: recall_at_3
value: 47.071000000000005
- type: recall_at_5
value: 54.079
- type: map_at_1
value: 32.671
- type: map_at_10
value: 43.334
- type: map_at_100
value: 44.566
- type: map_at_1000
value: 44.702999999999996
- type: map_at_3
value: 40.343
- type: map_at_5
value: 41.983
- type: mrr_at_1
value: 40.764
- type: mrr_at_10
value: 49.382
- type: mrr_at_100
value: 49.988
- type: mrr_at_1000
value: 50.03300000000001
- type: mrr_at_3
value: 47.293
- type: mrr_at_5
value: 48.51
- type: ndcg_at_1
value: 40.764
- type: ndcg_at_10
value: 49.039
- type: ndcg_at_100
value: 53.259
- type: ndcg_at_1000
value: 55.253
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 46.839999999999996
- type: precision_at_1
value: 40.764
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 21.72
- type: precision_at_5
value: 15.299
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 58.816
- type: recall_at_100
value: 76.654
- type: recall_at_1000
value: 89.05999999999999
- type: recall_at_3
value: 46.743
- type: recall_at_5
value: 51.783
- type: map_at_1
value: 40.328
- type: map_at_10
value: 53.32599999999999
- type: map_at_100
value: 54.37499999999999
- type: map_at_1000
value: 54.429
- type: map_at_3
value: 49.902
- type: map_at_5
value: 52.002
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 56.858
- type: mrr_at_100
value: 57.522
- type: mrr_at_1000
value: 57.54899999999999
- type: mrr_at_3
value: 54.472
- type: mrr_at_5
value: 55.996
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 59.313
- type: ndcg_at_100
value: 63.266999999999996
- type: ndcg_at_1000
value: 64.36
- type: ndcg_at_3
value: 53.815000000000005
- type: ndcg_at_5
value: 56.814
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.53
- type: precision_at_100
value: 1.238
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.054000000000002
- type: precision_at_5
value: 16.589000000000002
- type: recall_at_1
value: 40.328
- type: recall_at_10
value: 73.421
- type: recall_at_100
value: 90.059
- type: recall_at_1000
value: 97.81
- type: recall_at_3
value: 59.009
- type: recall_at_5
value: 66.352
- type: map_at_1
value: 27.424
- type: map_at_10
value: 36.332
- type: map_at_100
value: 37.347
- type: map_at_1000
value: 37.422
- type: map_at_3
value: 33.743
- type: map_at_5
value: 35.176
- type: mrr_at_1
value: 29.153000000000002
- type: mrr_at_10
value: 38.233
- type: mrr_at_100
value: 39.109
- type: mrr_at_1000
value: 39.164
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.169000000000004
- type: ndcg_at_1
value: 29.153000000000002
- type: ndcg_at_10
value: 41.439
- type: ndcg_at_100
value: 46.42
- type: ndcg_at_1000
value: 48.242000000000004
- type: ndcg_at_3
value: 36.362
- type: ndcg_at_5
value: 38.743
- type: precision_at_1
value: 29.153000000000002
- type: precision_at_10
value: 6.315999999999999
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 15.443000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.424
- type: recall_at_10
value: 55.364000000000004
- type: recall_at_100
value: 78.211
- type: recall_at_1000
value: 91.74600000000001
- type: recall_at_3
value: 41.379
- type: recall_at_5
value: 47.14
- type: map_at_1
value: 19.601
- type: map_at_10
value: 27.826
- type: map_at_100
value: 29.017
- type: map_at_1000
value: 29.137
- type: map_at_3
value: 25.125999999999998
- type: map_at_5
value: 26.765
- type: mrr_at_1
value: 24.005000000000003
- type: mrr_at_10
value: 32.716
- type: mrr_at_100
value: 33.631
- type: mrr_at_1000
value: 33.694
- type: mrr_at_3
value: 29.934
- type: mrr_at_5
value: 31.630999999999997
- type: ndcg_at_1
value: 24.005000000000003
- type: ndcg_at_10
value: 33.158
- type: ndcg_at_100
value: 38.739000000000004
- type: ndcg_at_1000
value: 41.495
- type: ndcg_at_3
value: 28.185
- type: ndcg_at_5
value: 30.796
- type: precision_at_1
value: 24.005000000000003
- type: precision_at_10
value: 5.908
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 13.391
- type: precision_at_5
value: 9.876
- type: recall_at_1
value: 19.601
- type: recall_at_10
value: 44.746
- type: recall_at_100
value: 68.82300000000001
- type: recall_at_1000
value: 88.215
- type: recall_at_3
value: 31.239
- type: recall_at_5
value: 37.695
- type: map_at_1
value: 30.130000000000003
- type: map_at_10
value: 40.96
- type: map_at_100
value: 42.282
- type: map_at_1000
value: 42.392
- type: map_at_3
value: 37.889
- type: map_at_5
value: 39.661
- type: mrr_at_1
value: 36.958999999999996
- type: mrr_at_10
value: 46.835
- type: mrr_at_100
value: 47.644
- type: mrr_at_1000
value: 47.688
- type: mrr_at_3
value: 44.562000000000005
- type: mrr_at_5
value: 45.938
- type: ndcg_at_1
value: 36.958999999999996
- type: ndcg_at_10
value: 47.06
- type: ndcg_at_100
value: 52.345
- type: ndcg_at_1000
value: 54.35
- type: ndcg_at_3
value: 42.301
- type: ndcg_at_5
value: 44.635999999999996
- type: precision_at_1
value: 36.958999999999996
- type: precision_at_10
value: 8.479000000000001
- type: precision_at_100
value: 1.284
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 20.244
- type: precision_at_5
value: 14.224999999999998
- type: recall_at_1
value: 30.130000000000003
- type: recall_at_10
value: 59.27
- type: recall_at_100
value: 81.195
- type: recall_at_1000
value: 94.21199999999999
- type: recall_at_3
value: 45.885
- type: recall_at_5
value: 52.016
- type: map_at_1
value: 26.169999999999998
- type: map_at_10
value: 36.451
- type: map_at_100
value: 37.791000000000004
- type: map_at_1000
value: 37.897
- type: map_at_3
value: 33.109
- type: map_at_5
value: 34.937000000000005
- type: mrr_at_1
value: 32.877
- type: mrr_at_10
value: 42.368
- type: mrr_at_100
value: 43.201
- type: mrr_at_1000
value: 43.259
- type: mrr_at_3
value: 39.763999999999996
- type: mrr_at_5
value: 41.260000000000005
- type: ndcg_at_1
value: 32.877
- type: ndcg_at_10
value: 42.659000000000006
- type: ndcg_at_100
value: 48.161
- type: ndcg_at_1000
value: 50.345
- type: ndcg_at_3
value: 37.302
- type: ndcg_at_5
value: 39.722
- type: precision_at_1
value: 32.877
- type: precision_at_10
value: 7.9
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.846
- type: precision_at_5
value: 12.9
- type: recall_at_1
value: 26.169999999999998
- type: recall_at_10
value: 55.35
- type: recall_at_100
value: 78.755
- type: recall_at_1000
value: 93.518
- type: recall_at_3
value: 40.176
- type: recall_at_5
value: 46.589000000000006
- type: map_at_1
value: 27.15516666666667
- type: map_at_10
value: 36.65741666666667
- type: map_at_100
value: 37.84991666666666
- type: map_at_1000
value: 37.96316666666667
- type: map_at_3
value: 33.74974999999999
- type: map_at_5
value: 35.3765
- type: mrr_at_1
value: 32.08233333333334
- type: mrr_at_10
value: 41.033833333333334
- type: mrr_at_100
value: 41.84524999999999
- type: mrr_at_1000
value: 41.89983333333333
- type: mrr_at_3
value: 38.62008333333333
- type: mrr_at_5
value: 40.03441666666666
- type: ndcg_at_1
value: 32.08233333333334
- type: ndcg_at_10
value: 42.229
- type: ndcg_at_100
value: 47.26716666666667
- type: ndcg_at_1000
value: 49.43466666666667
- type: ndcg_at_3
value: 37.36408333333333
- type: ndcg_at_5
value: 39.6715
- type: precision_at_1
value: 32.08233333333334
- type: precision_at_10
value: 7.382583333333334
- type: precision_at_100
value: 1.16625
- type: precision_at_1000
value: 0.15408333333333332
- type: precision_at_3
value: 17.218
- type: precision_at_5
value: 12.21875
- type: recall_at_1
value: 27.15516666666667
- type: recall_at_10
value: 54.36683333333333
- type: recall_at_100
value: 76.37183333333333
- type: recall_at_1000
value: 91.26183333333333
- type: recall_at_3
value: 40.769916666666674
- type: recall_at_5
value: 46.702333333333335
- type: map_at_1
value: 25.749
- type: map_at_10
value: 33.001999999999995
- type: map_at_100
value: 33.891
- type: map_at_1000
value: 33.993
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 31.959
- type: mrr_at_1
value: 28.834
- type: mrr_at_10
value: 35.955
- type: mrr_at_100
value: 36.709
- type: mrr_at_1000
value: 36.779
- type: mrr_at_3
value: 33.947
- type: mrr_at_5
value: 35.089
- type: ndcg_at_1
value: 28.834
- type: ndcg_at_10
value: 37.329
- type: ndcg_at_100
value: 41.79
- type: ndcg_at_1000
value: 44.169000000000004
- type: ndcg_at_3
value: 33.184999999999995
- type: ndcg_at_5
value: 35.107
- type: precision_at_1
value: 28.834
- type: precision_at_10
value: 5.7669999999999995
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.213000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 25.749
- type: recall_at_10
value: 47.791
- type: recall_at_100
value: 68.255
- type: recall_at_1000
value: 85.749
- type: recall_at_3
value: 36.199
- type: recall_at_5
value: 41.071999999999996
- type: map_at_1
value: 17.777
- type: map_at_10
value: 25.201
- type: map_at_100
value: 26.423999999999996
- type: map_at_1000
value: 26.544
- type: map_at_3
value: 22.869
- type: map_at_5
value: 24.023
- type: mrr_at_1
value: 21.473
- type: mrr_at_10
value: 29.12
- type: mrr_at_100
value: 30.144
- type: mrr_at_1000
value: 30.215999999999998
- type: mrr_at_3
value: 26.933
- type: mrr_at_5
value: 28.051
- type: ndcg_at_1
value: 21.473
- type: ndcg_at_10
value: 30.003
- type: ndcg_at_100
value: 35.766
- type: ndcg_at_1000
value: 38.501000000000005
- type: ndcg_at_3
value: 25.773000000000003
- type: ndcg_at_5
value: 27.462999999999997
- type: precision_at_1
value: 21.473
- type: precision_at_10
value: 5.482
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.205
- type: precision_at_5
value: 8.692
- type: recall_at_1
value: 17.777
- type: recall_at_10
value: 40.582
- type: recall_at_100
value: 66.305
- type: recall_at_1000
value: 85.636
- type: recall_at_3
value: 28.687
- type: recall_at_5
value: 33.089
- type: map_at_1
value: 26.677
- type: map_at_10
value: 36.309000000000005
- type: map_at_100
value: 37.403999999999996
- type: map_at_1000
value: 37.496
- type: map_at_3
value: 33.382
- type: map_at_5
value: 34.98
- type: mrr_at_1
value: 31.343
- type: mrr_at_10
value: 40.549
- type: mrr_at_100
value: 41.342
- type: mrr_at_1000
value: 41.397
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.451
- type: ndcg_at_1
value: 31.343
- type: ndcg_at_10
value: 42.1
- type: ndcg_at_100
value: 47.089999999999996
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 36.836999999999996
- type: ndcg_at_5
value: 39.21
- type: precision_at_1
value: 31.343
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.915
- type: precision_at_5
value: 11.940000000000001
- type: recall_at_1
value: 26.677
- type: recall_at_10
value: 55.54599999999999
- type: recall_at_100
value: 77.094
- type: recall_at_1000
value: 92.01
- type: recall_at_3
value: 41.191
- type: recall_at_5
value: 47.006
- type: map_at_1
value: 24.501
- type: map_at_10
value: 33.102
- type: map_at_100
value: 34.676
- type: map_at_1000
value: 34.888000000000005
- type: map_at_3
value: 29.944
- type: map_at_5
value: 31.613999999999997
- type: mrr_at_1
value: 29.447000000000003
- type: mrr_at_10
value: 37.996
- type: mrr_at_100
value: 38.946
- type: mrr_at_1000
value: 38.995000000000005
- type: mrr_at_3
value: 35.079
- type: mrr_at_5
value: 36.69
- type: ndcg_at_1
value: 29.447000000000003
- type: ndcg_at_10
value: 39.232
- type: ndcg_at_100
value: 45.247
- type: ndcg_at_1000
value: 47.613
- type: ndcg_at_3
value: 33.922999999999995
- type: ndcg_at_5
value: 36.284
- type: precision_at_1
value: 29.447000000000003
- type: precision_at_10
value: 7.648000000000001
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 16.008
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 24.501
- type: recall_at_10
value: 51.18899999999999
- type: recall_at_100
value: 78.437
- type: recall_at_1000
value: 92.842
- type: recall_at_3
value: 35.808
- type: recall_at_5
value: 42.197
- type: map_at_1
value: 22.039
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.275
- type: map_at_1000
value: 31.379
- type: map_at_3
value: 27.98
- type: map_at_5
value: 29.358
- type: mrr_at_1
value: 24.03
- type: mrr_at_10
value: 32.568000000000005
- type: mrr_at_100
value: 33.403
- type: mrr_at_1000
value: 33.475
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 31.796000000000003
- type: ndcg_at_1
value: 24.03
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 39.668
- type: ndcg_at_1000
value: 42.296
- type: ndcg_at_3
value: 30.709999999999997
- type: ndcg_at_5
value: 33.024
- type: precision_at_1
value: 24.03
- type: precision_at_10
value: 5.564
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 13.309000000000001
- type: precision_at_5
value: 9.39
- type: recall_at_1
value: 22.039
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 68.23599999999999
- type: recall_at_1000
value: 87.852
- type: recall_at_3
value: 35.852000000000004
- type: recall_at_5
value: 41.410000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.692999999999998
- type: map_at_10
value: 26.903
- type: map_at_100
value: 28.987000000000002
- type: map_at_1000
value: 29.176999999999996
- type: map_at_3
value: 22.137
- type: map_at_5
value: 24.758
- type: mrr_at_1
value: 35.57
- type: mrr_at_10
value: 47.821999999999996
- type: mrr_at_100
value: 48.608000000000004
- type: mrr_at_1000
value: 48.638999999999996
- type: mrr_at_3
value: 44.452000000000005
- type: mrr_at_5
value: 46.546
- type: ndcg_at_1
value: 35.57
- type: ndcg_at_10
value: 36.567
- type: ndcg_at_100
value: 44.085
- type: ndcg_at_1000
value: 47.24
- type: ndcg_at_3
value: 29.964000000000002
- type: ndcg_at_5
value: 32.511
- type: precision_at_1
value: 35.57
- type: precision_at_10
value: 11.485
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 22.237000000000002
- type: precision_at_5
value: 17.471999999999998
- type: recall_at_1
value: 15.692999999999998
- type: recall_at_10
value: 43.056
- type: recall_at_100
value: 68.628
- type: recall_at_1000
value: 86.075
- type: recall_at_3
value: 26.918999999999997
- type: recall_at_5
value: 34.14
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.53
- type: map_at_10
value: 20.951
- type: map_at_100
value: 30.136000000000003
- type: map_at_1000
value: 31.801000000000002
- type: map_at_3
value: 15.021
- type: map_at_5
value: 17.471999999999998
- type: mrr_at_1
value: 71.0
- type: mrr_at_10
value: 79.176
- type: mrr_at_100
value: 79.418
- type: mrr_at_1000
value: 79.426
- type: mrr_at_3
value: 78.125
- type: mrr_at_5
value: 78.61200000000001
- type: ndcg_at_1
value: 58.5
- type: ndcg_at_10
value: 44.106
- type: ndcg_at_100
value: 49.268
- type: ndcg_at_1000
value: 56.711999999999996
- type: ndcg_at_3
value: 48.934
- type: ndcg_at_5
value: 45.826
- type: precision_at_1
value: 71.0
- type: precision_at_10
value: 35.0
- type: precision_at_100
value: 11.360000000000001
- type: precision_at_1000
value: 2.046
- type: precision_at_3
value: 52.833
- type: precision_at_5
value: 44.15
- type: recall_at_1
value: 9.53
- type: recall_at_10
value: 26.811
- type: recall_at_100
value: 55.916999999999994
- type: recall_at_1000
value: 79.973
- type: recall_at_3
value: 16.413
- type: recall_at_5
value: 19.980999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.519999999999996
- type: f1
value: 46.36601294761231
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.413
- type: map_at_10
value: 83.414
- type: map_at_100
value: 83.621
- type: map_at_1000
value: 83.635
- type: map_at_3
value: 82.337
- type: map_at_5
value: 83.039
- type: mrr_at_1
value: 80.19800000000001
- type: mrr_at_10
value: 87.715
- type: mrr_at_100
value: 87.778
- type: mrr_at_1000
value: 87.779
- type: mrr_at_3
value: 87.106
- type: mrr_at_5
value: 87.555
- type: ndcg_at_1
value: 80.19800000000001
- type: ndcg_at_10
value: 87.182
- type: ndcg_at_100
value: 87.90299999999999
- type: ndcg_at_1000
value: 88.143
- type: ndcg_at_3
value: 85.60600000000001
- type: ndcg_at_5
value: 86.541
- type: precision_at_1
value: 80.19800000000001
- type: precision_at_10
value: 10.531
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 20.429
- type: recall_at_1
value: 74.413
- type: recall_at_10
value: 94.363
- type: recall_at_100
value: 97.165
- type: recall_at_1000
value: 98.668
- type: recall_at_3
value: 90.108
- type: recall_at_5
value: 92.52
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.701
- type: map_at_10
value: 37.122
- type: map_at_100
value: 39.178000000000004
- type: map_at_1000
value: 39.326
- type: map_at_3
value: 32.971000000000004
- type: map_at_5
value: 35.332
- type: mrr_at_1
value: 44.753
- type: mrr_at_10
value: 53.452
- type: mrr_at_100
value: 54.198
- type: mrr_at_1000
value: 54.225
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.464
- type: ndcg_at_1
value: 44.753
- type: ndcg_at_10
value: 45.021
- type: ndcg_at_100
value: 52.028
- type: ndcg_at_1000
value: 54.596000000000004
- type: ndcg_at_3
value: 41.622
- type: ndcg_at_5
value: 42.736000000000004
- type: precision_at_1
value: 44.753
- type: precision_at_10
value: 12.284
- type: precision_at_100
value: 1.955
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 27.828999999999997
- type: precision_at_5
value: 20.061999999999998
- type: recall_at_1
value: 22.701
- type: recall_at_10
value: 51.432
- type: recall_at_100
value: 77.009
- type: recall_at_1000
value: 92.511
- type: recall_at_3
value: 37.919000000000004
- type: recall_at_5
value: 44.131
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.189
- type: map_at_10
value: 66.24600000000001
- type: map_at_100
value: 67.098
- type: map_at_1000
value: 67.149
- type: map_at_3
value: 62.684
- type: map_at_5
value: 64.974
- type: mrr_at_1
value: 80.378
- type: mrr_at_10
value: 86.127
- type: mrr_at_100
value: 86.29299999999999
- type: mrr_at_1000
value: 86.297
- type: mrr_at_3
value: 85.31400000000001
- type: mrr_at_5
value: 85.858
- type: ndcg_at_1
value: 80.378
- type: ndcg_at_10
value: 74.101
- type: ndcg_at_100
value: 76.993
- type: ndcg_at_1000
value: 77.948
- type: ndcg_at_3
value: 69.232
- type: ndcg_at_5
value: 72.04599999999999
- type: precision_at_1
value: 80.378
- type: precision_at_10
value: 15.595999999999998
- type: precision_at_100
value: 1.7840000000000003
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 44.884
- type: precision_at_5
value: 29.145
- type: recall_at_1
value: 40.189
- type: recall_at_10
value: 77.981
- type: recall_at_100
value: 89.21
- type: recall_at_1000
value: 95.48299999999999
- type: recall_at_3
value: 67.326
- type: recall_at_5
value: 72.863
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.84599999999999
- type: ap
value: 89.4710787567357
- type: f1
value: 92.83752676932258
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.132
- type: map_at_10
value: 35.543
- type: map_at_100
value: 36.702
- type: map_at_1000
value: 36.748999999999995
- type: map_at_3
value: 31.737
- type: map_at_5
value: 33.927
- type: mrr_at_1
value: 23.782
- type: mrr_at_10
value: 36.204
- type: mrr_at_100
value: 37.29
- type: mrr_at_1000
value: 37.330999999999996
- type: mrr_at_3
value: 32.458999999999996
- type: mrr_at_5
value: 34.631
- type: ndcg_at_1
value: 23.782
- type: ndcg_at_10
value: 42.492999999999995
- type: ndcg_at_100
value: 47.985
- type: ndcg_at_1000
value: 49.141
- type: ndcg_at_3
value: 34.748000000000005
- type: ndcg_at_5
value: 38.651
- type: precision_at_1
value: 23.782
- type: precision_at_10
value: 6.665
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.776
- type: precision_at_5
value: 10.84
- type: recall_at_1
value: 23.132
- type: recall_at_10
value: 63.794
- type: recall_at_100
value: 89.027
- type: recall_at_1000
value: 97.807
- type: recall_at_3
value: 42.765
- type: recall_at_5
value: 52.11
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.59188326493388
- type: f1
value: 94.3842594786827
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.49384404924761
- type: f1
value: 59.7580539534629
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.56220578345663
- type: f1
value: 75.27228165561478
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.53463349024884
- type: f1
value: 80.4893958236536
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.56100273484962
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.470380028839607
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.06102792457849
- type: mrr
value: 33.30709199672238
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.776999999999999
- type: map_at_10
value: 14.924000000000001
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.538999999999998
- type: map_at_3
value: 10.982
- type: map_at_5
value: 12.679000000000002
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.232000000000006
- type: mrr_at_100
value: 57.818999999999996
- type: mrr_at_1000
value: 57.847
- type: mrr_at_3
value: 54.901999999999994
- type: mrr_at_5
value: 56.481
- type: ndcg_at_1
value: 46.594
- type: ndcg_at_10
value: 38.129000000000005
- type: ndcg_at_100
value: 35.54
- type: ndcg_at_1000
value: 44.172
- type: ndcg_at_3
value: 43.025999999999996
- type: ndcg_at_5
value: 41.052
- type: precision_at_1
value: 47.988
- type: precision_at_10
value: 28.111000000000004
- type: precision_at_100
value: 8.929
- type: precision_at_1000
value: 2.185
- type: precision_at_3
value: 40.144000000000005
- type: precision_at_5
value: 35.232
- type: recall_at_1
value: 6.776999999999999
- type: recall_at_10
value: 19.289
- type: recall_at_100
value: 36.359
- type: recall_at_1000
value: 67.54
- type: recall_at_3
value: 11.869
- type: recall_at_5
value: 14.999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.108000000000004
- type: map_at_10
value: 47.126000000000005
- type: map_at_100
value: 48.171
- type: map_at_1000
value: 48.199
- type: map_at_3
value: 42.734
- type: map_at_5
value: 45.362
- type: mrr_at_1
value: 34.936
- type: mrr_at_10
value: 49.571
- type: mrr_at_100
value: 50.345
- type: mrr_at_1000
value: 50.363
- type: mrr_at_3
value: 45.959
- type: mrr_at_5
value: 48.165
- type: ndcg_at_1
value: 34.936
- type: ndcg_at_10
value: 55.028999999999996
- type: ndcg_at_100
value: 59.244
- type: ndcg_at_1000
value: 59.861
- type: ndcg_at_3
value: 46.872
- type: ndcg_at_5
value: 51.217999999999996
- type: precision_at_1
value: 34.936
- type: precision_at_10
value: 9.099
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.456
- type: precision_at_5
value: 15.411
- type: recall_at_1
value: 31.108000000000004
- type: recall_at_10
value: 76.53999999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.947
- type: recall_at_3
value: 55.572
- type: recall_at_5
value: 65.525
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.56400000000001
- type: map_at_10
value: 85.482
- type: map_at_100
value: 86.114
- type: map_at_1000
value: 86.13
- type: map_at_3
value: 82.607
- type: map_at_5
value: 84.405
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.304
- type: mrr_at_100
value: 88.399
- type: mrr_at_1000
value: 88.399
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.024
- type: ndcg_at_1
value: 82.45
- type: ndcg_at_10
value: 89.06500000000001
- type: ndcg_at_100
value: 90.232
- type: ndcg_at_1000
value: 90.305
- type: ndcg_at_3
value: 86.375
- type: ndcg_at_5
value: 87.85300000000001
- type: precision_at_1
value: 82.45
- type: precision_at_10
value: 13.486999999999998
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.813
- type: precision_at_5
value: 24.773999999999997
- type: recall_at_1
value: 71.56400000000001
- type: recall_at_10
value: 95.812
- type: recall_at_100
value: 99.7
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 87.966
- type: recall_at_5
value: 92.268
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 57.241876648614145
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.66212576446223
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.308
- type: map_at_10
value: 13.803
- type: map_at_100
value: 16.176
- type: map_at_1000
value: 16.561
- type: map_at_3
value: 9.761000000000001
- type: map_at_5
value: 11.802
- type: mrr_at_1
value: 26.200000000000003
- type: mrr_at_10
value: 37.621
- type: mrr_at_100
value: 38.767
- type: mrr_at_1000
value: 38.815
- type: mrr_at_3
value: 34.117
- type: mrr_at_5
value: 36.107
- type: ndcg_at_1
value: 26.200000000000003
- type: ndcg_at_10
value: 22.64
- type: ndcg_at_100
value: 31.567
- type: ndcg_at_1000
value: 37.623
- type: ndcg_at_3
value: 21.435000000000002
- type: ndcg_at_5
value: 18.87
- type: precision_at_1
value: 26.200000000000003
- type: precision_at_10
value: 11.74
- type: precision_at_100
value: 2.465
- type: precision_at_1000
value: 0.391
- type: precision_at_3
value: 20.033
- type: precision_at_5
value: 16.64
- type: recall_at_1
value: 5.308
- type: recall_at_10
value: 23.794999999999998
- type: recall_at_100
value: 50.015
- type: recall_at_1000
value: 79.283
- type: recall_at_3
value: 12.178
- type: recall_at_5
value: 16.882
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.93231134675553
- type: cos_sim_spearman
value: 81.68319292603205
- type: euclidean_pearson
value: 81.8396814380367
- type: euclidean_spearman
value: 81.24641903349945
- type: manhattan_pearson
value: 81.84698799204274
- type: manhattan_spearman
value: 81.24269997904105
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.73241671587446
- type: cos_sim_spearman
value: 79.05091082971826
- type: euclidean_pearson
value: 83.91146869578044
- type: euclidean_spearman
value: 79.87978465370936
- type: manhattan_pearson
value: 83.90888338917678
- type: manhattan_spearman
value: 79.87482848584241
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.14970731146177
- type: cos_sim_spearman
value: 86.37363490084627
- type: euclidean_pearson
value: 83.02154218530433
- type: euclidean_spearman
value: 83.80258761957367
- type: manhattan_pearson
value: 83.01664495119347
- type: manhattan_spearman
value: 83.77567458007952
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.40474139886784
- type: cos_sim_spearman
value: 82.77768789165984
- type: euclidean_pearson
value: 80.7065877443695
- type: euclidean_spearman
value: 81.375940662505
- type: manhattan_pearson
value: 80.6507552270278
- type: manhattan_spearman
value: 81.32782179098741
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.08585968722274
- type: cos_sim_spearman
value: 88.03110031451399
- type: euclidean_pearson
value: 85.74012019602384
- type: euclidean_spearman
value: 86.13592849438209
- type: manhattan_pearson
value: 85.74404842369206
- type: manhattan_spearman
value: 86.14492318960154
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.95069052788875
- type: cos_sim_spearman
value: 86.4867991595147
- type: euclidean_pearson
value: 84.31013325754635
- type: euclidean_spearman
value: 85.01529258006482
- type: manhattan_pearson
value: 84.26995570085374
- type: manhattan_spearman
value: 84.96982104986162
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.54617647971897
- type: cos_sim_spearman
value: 87.49834181751034
- type: euclidean_pearson
value: 86.01015322577122
- type: euclidean_spearman
value: 84.63362652063199
- type: manhattan_pearson
value: 86.13807574475706
- type: manhattan_spearman
value: 84.7772370721132
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.20047755786615
- type: cos_sim_spearman
value: 67.05324077987636
- type: euclidean_pearson
value: 66.91930642976601
- type: euclidean_spearman
value: 65.21491856099105
- type: manhattan_pearson
value: 66.78756851976624
- type: manhattan_spearman
value: 65.12356257740728
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.19852871539686
- type: cos_sim_spearman
value: 87.5161895296395
- type: euclidean_pearson
value: 84.59848645207485
- type: euclidean_spearman
value: 85.26427328757919
- type: manhattan_pearson
value: 84.59747366996524
- type: manhattan_spearman
value: 85.24045855146915
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.63320317811032
- type: mrr
value: 96.26242947321379
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.928000000000004
- type: map_at_10
value: 70.112
- type: map_at_100
value: 70.59299999999999
- type: map_at_1000
value: 70.623
- type: map_at_3
value: 66.846
- type: map_at_5
value: 68.447
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 71.212
- type: mrr_at_100
value: 71.616
- type: mrr_at_1000
value: 71.64500000000001
- type: mrr_at_3
value: 68.77799999999999
- type: mrr_at_5
value: 70.094
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 74.607
- type: ndcg_at_100
value: 76.416
- type: ndcg_at_1000
value: 77.102
- type: ndcg_at_3
value: 69.126
- type: ndcg_at_5
value: 71.41300000000001
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 9.933
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.556
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 60.928000000000004
- type: recall_at_10
value: 87.322
- type: recall_at_100
value: 94.833
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.428
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86237623762376
- type: cos_sim_ap
value: 96.72586477206649
- type: cos_sim_f1
value: 93.01858362631845
- type: cos_sim_precision
value: 93.4409687184662
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.78019801980199
- type: dot_ap
value: 93.72748205246228
- type: dot_f1
value: 89.04109589041096
- type: dot_precision
value: 87.16475095785441
- type: dot_recall
value: 91.0
- type: euclidean_accuracy
value: 99.85445544554456
- type: euclidean_ap
value: 96.6661459876145
- type: euclidean_f1
value: 92.58337481333997
- type: euclidean_precision
value: 92.17046580773042
- type: euclidean_recall
value: 93.0
- type: manhattan_accuracy
value: 99.85445544554456
- type: manhattan_ap
value: 96.6883549244056
- type: manhattan_f1
value: 92.57598405580468
- type: manhattan_precision
value: 92.25422045680239
- type: manhattan_recall
value: 92.9
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.72586477206649
- type: max_f1
value: 93.01858362631845
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.39930057069995
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96398659903402
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.946944700355395
- type: mrr
value: 56.97151398438164
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.541657650692905
- type: cos_sim_spearman
value: 31.605804192286303
- type: dot_pearson
value: 28.26905996736398
- type: dot_spearman
value: 27.864801765851187
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.8870000000000002
- type: map_at_100
value: 9.78
- type: map_at_1000
value: 22.514
- type: map_at_3
value: 0.6669999999999999
- type: map_at_5
value: 1.077
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 89.86699999999999
- type: mrr_at_100
value: 89.86699999999999
- type: mrr_at_1000
value: 89.86699999999999
- type: mrr_at_3
value: 89.667
- type: mrr_at_5
value: 89.667
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 74.818
- type: ndcg_at_100
value: 53.715999999999994
- type: ndcg_at_1000
value: 47.082
- type: ndcg_at_3
value: 82.134
- type: ndcg_at_5
value: 79.81899999999999
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 54.48
- type: precision_at_1000
value: 20.518
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 2.072
- type: recall_at_100
value: 13.013
- type: recall_at_1000
value: 43.462
- type: recall_at_3
value: 0.695
- type: recall_at_5
value: 1.139
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.328
- type: map_at_10
value: 9.795
- type: map_at_100
value: 15.801000000000002
- type: map_at_1000
value: 17.23
- type: map_at_3
value: 4.734
- type: map_at_5
value: 6.644
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 46.902
- type: mrr_at_100
value: 47.495
- type: mrr_at_1000
value: 47.495
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 44.218
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 24.806
- type: ndcg_at_100
value: 36.419000000000004
- type: ndcg_at_1000
value: 47.272999999999996
- type: ndcg_at_3
value: 25.666
- type: ndcg_at_5
value: 25.448999999999998
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 23.061
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.484
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.328
- type: recall_at_10
value: 16.524
- type: recall_at_100
value: 47.179
- type: recall_at_1000
value: 81.22200000000001
- type: recall_at_3
value: 5.745
- type: recall_at_5
value: 9.339
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9142
- type: ap
value: 14.335574772555415
- type: f1
value: 54.62839595194111
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.94340690435768
- type: f1
value: 60.286487936731916
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.26597708987974
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.48882398521786
- type: cos_sim_ap
value: 79.04326607602204
- type: cos_sim_f1
value: 71.64566826860633
- type: cos_sim_precision
value: 70.55512918905092
- type: cos_sim_recall
value: 72.77044854881267
- type: dot_accuracy
value: 84.19264469213805
- type: dot_ap
value: 67.96360043562528
- type: dot_f1
value: 64.06418393006827
- type: dot_precision
value: 58.64941898706424
- type: dot_recall
value: 70.58047493403694
- type: euclidean_accuracy
value: 87.45902127913214
- type: euclidean_ap
value: 78.9742237648272
- type: euclidean_f1
value: 71.5553235908142
- type: euclidean_precision
value: 70.77955601445535
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.41729749061214
- type: manhattan_ap
value: 78.90073137580596
- type: manhattan_f1
value: 71.3942611553533
- type: manhattan_precision
value: 68.52705653967483
- type: manhattan_recall
value: 74.51187335092348
- type: max_accuracy
value: 87.48882398521786
- type: max_ap
value: 79.04326607602204
- type: max_f1
value: 71.64566826860633
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.68125897465751
- type: cos_sim_ap
value: 85.6003454431979
- type: cos_sim_f1
value: 77.6957163958641
- type: cos_sim_precision
value: 73.0110366307807
- type: cos_sim_recall
value: 83.02279026793964
- type: dot_accuracy
value: 87.7672992587418
- type: dot_ap
value: 82.4971301112899
- type: dot_f1
value: 75.90528233151184
- type: dot_precision
value: 72.0370626469368
- type: dot_recall
value: 80.21250384970742
- type: euclidean_accuracy
value: 88.4503434625684
- type: euclidean_ap
value: 84.91949884748384
- type: euclidean_f1
value: 76.92365018444684
- type: euclidean_precision
value: 74.53245721712759
- type: euclidean_recall
value: 79.47336002463813
- type: manhattan_accuracy
value: 88.47556952691427
- type: manhattan_ap
value: 84.8963689101517
- type: manhattan_f1
value: 76.85901249256395
- type: manhattan_precision
value: 74.31693989071039
- type: manhattan_recall
value: 79.58115183246073
- type: max_accuracy
value: 88.68125897465751
- type: max_ap
value: 85.6003454431979
- type: max_f1
value: 77.6957163958641
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href="#model-list">Model List</a> |
<a href="#frequently-asked-questions">FAQ</a> |
<a href="#usage">Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
</p>
</h4>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. The model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE have been released
- 09/12/2023: New models:
  - **New reranker model**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
  - **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, the **best performance among models of the same size 🤗**
- 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, which **rank 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for relevant passages for a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, and you can just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Unlike an embedding model, a reranker uses the question and document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.
All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be directly used to calculate similarity, and it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE model lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate similarity threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9).
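A minimal sketch of such threshold-based filtering (the 0.85 cutoff and the sentence pairs below are only illustrative assumptions; pick your threshold from your own data):
```python
from sentence_transformers import SentenceTransformer

# Toy sentence pairs; the threshold is an assumption chosen for illustration.
pairs = [("A man is playing guitar.", "Someone is playing an instrument."),
         ("A man is playing guitar.", "The stock market fell today.")]
threshold = 0.85

model = SentenceTransformer('BAAI/bge-base-en-v1.5')
for s1, s2 in pairs:
    e1, e2 = model.encode([s1, s2], normalize_embeddings=True)
    score = float(e1 @ e2)
    print(f"{score:.3f}", "similar" if score >= threshold else "not similar")
```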
</details>
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used.
Omitting the instruction causes only a slight degradation in retrieval performance compared with using it,
so for convenience you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, the instruction does not need to be added to documents/passages.
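If you want a quick way to compare the two settings on your own data, a rough sketch along these lines may help (the query and passages below are made up for illustration):
```python
from sentence_transformers import SentenceTransformer

# Compare retrieval scores with and without the query instruction (toy data).
model = SentenceTransformer('BAAI/bge-base-en-v1.5')
instruction = "Represent this sentence for searching relevant passages: "
query = "how do pandas survive in the wild"
passages = ["Giant pandas live in bamboo forests and feed almost exclusively on bamboo.",
            "Paris is the capital of France."]

p_emb = model.encode(passages, normalize_embeddings=True)
for prefix, label in [("", "no instruction"), (instruction, "with instruction")]:
    q_emb = model.encode([prefix + query], normalize_embeddings=True)
    print(label, (q_emb @ p_emb.T)[0])
```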
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query.
# The corpus in a retrieval task can still be encoded with encode() or encode_corpus(), since passages don't need the instruction.
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
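For example (a minimal sketch; the device ids are placeholders and should be adapted to your machine):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"   # select GPUs 0 and 1; use "" to force CPU

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1", "样例数据-2"])
```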
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model as follows: first pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add the instruction to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13")
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to each query (do not add the instruction to passages)
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort are identical
```
It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this is sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
EngineArgs(model_name_or_path = "BAAI/bge-large-en-v1.5", device="cpu", engine="optimum" # or engine="torch"
))
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
### Usage for Reranker
Unlike an embedding model, the reranker uses the question and document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
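The retrieve-then-rerank pattern mentioned in the Model List notes can be sketched roughly as follows (the corpus, model choices, and top-k values here are illustrative assumptions):
```python
from FlagEmbedding import FlagModel, FlagReranker

query = "what is panda?"
corpus = ["The giant panda is a bear species endemic to China.",
          "Pandas feed almost exclusively on bamboo.",
          "Paris is the capital of France."]

# Step 1: retrieve candidates with the embedding model (bi-encoder).
retriever = FlagModel('BAAI/bge-base-en-v1.5',
                      query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
                      use_fp16=True)
q_emb = retriever.encode_queries([query])
p_emb = retriever.encode(corpus)
candidate_ids = (q_emb @ p_emb.T)[0].argsort()[::-1][:2]   # keep the top-2 candidates

# Step 2: re-rank the candidates with the cross-encoder.
reranker = FlagReranker('BAAI/bge-reranker-base', use_fp16=True)
scores = reranker.compute_score([[query, corpus[i]] for i in candidate_ids])
best = candidate_ids[max(range(len(scores)), key=lambda i: scores[i])]
print("best passage:", corpus[best])
```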
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
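The raw scores are unbounded logits; if you prefer values in the (0, 1) range, one option (our suggestion, not an official FlagEmbedding API) is to pass them through a sigmoid:
```python
import torch

# Map unbounded reranker logits into (0, 1); the example logits are made up.
logits = torch.tensor([-2.3, 5.7])
probabilities = torch.sigmoid(logits)
print(probabilities)   # values close to 0 and close to 1, respectively
```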
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
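As a rough sketch, a single MTEB task can be run with the `mteb` package along these lines (the task choice and output folder are arbitrary, and the exact API may differ between `mteb` versions):
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
evaluation = MTEB(tasks=["Banking77Classification"])   # any MTEB task name can go here
results = evaluation.run(model, output_folder="results/bge-base-en-v1.5")
print(results)
```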
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned first.
For more training details on bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
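For intuition only, here is a bare-bones sketch of in-batch contrastive (InfoNCE-style) training with a small temperature, in the spirit of the recipe described above and in the FAQ; it is not the project's actual training code:
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb: torch.Tensor, p_emb: torch.Tensor,
                              temperature: float = 0.01) -> torch.Tensor:
    """The i-th passage is the positive for the i-th query; all other passages in the batch act as negatives."""
    q = F.normalize(q_emb, dim=-1)
    p = F.normalize(p_emb, dim=-1)
    logits = q @ p.T / temperature                 # scaled cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

# Toy embeddings just to show the call; real training would use encoder outputs.
loss = in_batch_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
print(loss.item())
```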
### BGE Reranker
The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation.
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BEAR",
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-supervised | McGill-NLP | sentence-similarity | [
"peft",
"safetensors",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2404.05961",
"license:mit",
"model-index",
"region:us"
] | 1,712 | 1,712 | 1,163 | 4 | ---
language:
- en
library_name: peft
license: mit
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- text-reranking
- feature-extraction
- sentence-similarity
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
model-index:
- name: LLM2Vec-Sheared-LLaMA-supervised
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.41791044776119
- type: ap
value: 41.45458580415683
- type: f1
value: 71.63305447032735
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 82.0527
- type: ap
value: 77.3222852456055
- type: f1
value: 81.97981459031165
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.806000000000004
- type: f1
value: 40.3299129176701
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.391000000000002
- type: map_at_10
value: 41.919000000000004
- type: map_at_100
value: 42.846000000000004
- type: map_at_1000
value: 42.851
- type: map_at_3
value: 36.260999999999996
- type: map_at_5
value: 39.528999999999996
- type: mrr_at_1
value: 26.245
- type: mrr_at_10
value: 42.215
- type: mrr_at_100
value: 43.135
- type: mrr_at_1000
value: 43.14
- type: mrr_at_3
value: 36.546
- type: mrr_at_5
value: 39.782000000000004
- type: ndcg_at_1
value: 25.391000000000002
- type: ndcg_at_10
value: 51.663000000000004
- type: ndcg_at_100
value: 55.419
- type: ndcg_at_1000
value: 55.517
- type: ndcg_at_3
value: 39.96
- type: ndcg_at_5
value: 45.909
- type: precision_at_1
value: 25.391000000000002
- type: precision_at_10
value: 8.3
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.904
- type: precision_at_5
value: 13.058
- type: recall_at_1
value: 25.391000000000002
- type: recall_at_10
value: 83.001
- type: recall_at_100
value: 98.933
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 50.711
- type: recall_at_5
value: 65.292
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.472186058302285
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 39.846039374129546
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.713811638804174
- type: mrr
value: 73.38906476718111
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 85.88328221005123
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.00974025974025
- type: f1
value: 85.97349359388288
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.102075665637685
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.27583239919031
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: cqadupstack/android
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.043
- type: map_at_10
value: 44.515
- type: map_at_100
value: 45.967999999999996
- type: map_at_1000
value: 46.098
- type: map_at_3
value: 40.285
- type: map_at_5
value: 42.841
- type: mrr_at_1
value: 40.2
- type: mrr_at_10
value: 50.233000000000004
- type: mrr_at_100
value: 50.938
- type: mrr_at_1000
value: 50.978
- type: mrr_at_3
value: 47.353
- type: mrr_at_5
value: 49.034
- type: ndcg_at_1
value: 40.2
- type: ndcg_at_10
value: 51.096
- type: ndcg_at_100
value: 56.267999999999994
- type: ndcg_at_1000
value: 58.092999999999996
- type: ndcg_at_3
value: 45.09
- type: ndcg_at_5
value: 48.198
- type: precision_at_1
value: 40.2
- type: precision_at_10
value: 9.843
- type: precision_at_100
value: 1.546
- type: precision_at_1000
value: 0.20400000000000001
- type: precision_at_3
value: 21.507
- type: precision_at_5
value: 15.966
- type: recall_at_1
value: 33.043
- type: recall_at_10
value: 63.871
- type: recall_at_100
value: 85.527
- type: recall_at_1000
value: 96.936
- type: recall_at_3
value: 46.859
- type: recall_at_5
value: 55.116
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: cqadupstack/english
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.924000000000003
- type: map_at_10
value: 42.298
- type: map_at_100
value: 43.589
- type: map_at_1000
value: 43.724000000000004
- type: map_at_3
value: 39.739999999999995
- type: map_at_5
value: 41.131
- type: mrr_at_1
value: 40.064
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.07
- type: mrr_at_1000
value: 49.113
- type: mrr_at_3
value: 46.635
- type: mrr_at_5
value: 47.549
- type: ndcg_at_1
value: 40.064
- type: ndcg_at_10
value: 47.686
- type: ndcg_at_100
value: 52.054
- type: ndcg_at_1000
value: 54.151
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 45.727000000000004
- type: precision_at_1
value: 40.064
- type: precision_at_10
value: 8.770999999999999
- type: precision_at_100
value: 1.422
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 21.741
- type: precision_at_5
value: 14.790000000000001
- type: recall_at_1
value: 31.924000000000003
- type: recall_at_10
value: 56.603
- type: recall_at_100
value: 74.82900000000001
- type: recall_at_1000
value: 88.176
- type: recall_at_3
value: 46.11
- type: recall_at_5
value: 50.273999999999994
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: cqadupstack/gaming
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.721000000000004
- type: map_at_10
value: 53.053
- type: map_at_100
value: 54.103
- type: map_at_1000
value: 54.157999999999994
- type: map_at_3
value: 49.854
- type: map_at_5
value: 51.547
- type: mrr_at_1
value: 46.833999999999996
- type: mrr_at_10
value: 56.61000000000001
- type: mrr_at_100
value: 57.286
- type: mrr_at_1000
value: 57.312
- type: mrr_at_3
value: 54.17999999999999
- type: mrr_at_5
value: 55.503
- type: ndcg_at_1
value: 46.833999999999996
- type: ndcg_at_10
value: 58.928000000000004
- type: ndcg_at_100
value: 62.939
- type: ndcg_at_1000
value: 63.970000000000006
- type: ndcg_at_3
value: 53.599
- type: ndcg_at_5
value: 55.96600000000001
- type: precision_at_1
value: 46.833999999999996
- type: precision_at_10
value: 9.48
- type: precision_at_100
value: 1.2349999999999999
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.032999999999998
- type: precision_at_5
value: 16.213
- type: recall_at_1
value: 40.721000000000004
- type: recall_at_10
value: 72.653
- type: recall_at_100
value: 89.91900000000001
- type: recall_at_1000
value: 97.092
- type: recall_at_3
value: 58.135999999999996
- type: recall_at_5
value: 64.156
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: cqadupstack/gis
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.938
- type: map_at_10
value: 34.027
- type: map_at_100
value: 34.999
- type: map_at_1000
value: 35.083
- type: map_at_3
value: 31.154
- type: map_at_5
value: 32.767
- type: mrr_at_1
value: 27.006000000000004
- type: mrr_at_10
value: 36.192
- type: mrr_at_100
value: 36.989
- type: mrr_at_1000
value: 37.053999999999995
- type: mrr_at_3
value: 33.503
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 27.006000000000004
- type: ndcg_at_10
value: 39.297
- type: ndcg_at_100
value: 44.078
- type: ndcg_at_1000
value: 46.162
- type: ndcg_at_3
value: 33.695
- type: ndcg_at_5
value: 36.401
- type: precision_at_1
value: 27.006000000000004
- type: precision_at_10
value: 6.181
- type: precision_at_100
value: 0.905
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.426
- type: precision_at_5
value: 10.215
- type: recall_at_1
value: 24.938
- type: recall_at_10
value: 53.433
- type: recall_at_100
value: 75.558
- type: recall_at_1000
value: 91.096
- type: recall_at_3
value: 38.421
- type: recall_at_5
value: 44.906
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: cqadupstack/mathematica
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.565999999999999
- type: map_at_10
value: 23.419999999999998
- type: map_at_100
value: 24.678
- type: map_at_1000
value: 24.801000000000002
- type: map_at_3
value: 20.465
- type: map_at_5
value: 21.979000000000003
- type: mrr_at_1
value: 19.652
- type: mrr_at_10
value: 27.929
- type: mrr_at_100
value: 28.92
- type: mrr_at_1000
value: 28.991
- type: mrr_at_3
value: 25.249
- type: mrr_at_5
value: 26.66
- type: ndcg_at_1
value: 19.652
- type: ndcg_at_10
value: 28.869
- type: ndcg_at_100
value: 34.675
- type: ndcg_at_1000
value: 37.577
- type: ndcg_at_3
value: 23.535
- type: ndcg_at_5
value: 25.807999999999996
- type: precision_at_1
value: 19.652
- type: precision_at_10
value: 5.659
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 11.401
- type: precision_at_5
value: 8.581999999999999
- type: recall_at_1
value: 15.565999999999999
- type: recall_at_10
value: 41.163
- type: recall_at_100
value: 66.405
- type: recall_at_1000
value: 87.071
- type: recall_at_3
value: 26.478
- type: recall_at_5
value: 32.217
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: cqadupstack/physics
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.834
- type: map_at_10
value: 41.49
- type: map_at_100
value: 42.897999999999996
- type: map_at_1000
value: 43.004
- type: map_at_3
value: 38.151
- type: map_at_5
value: 40.157
- type: mrr_at_1
value: 38.306000000000004
- type: mrr_at_10
value: 47.371
- type: mrr_at_100
value: 48.265
- type: mrr_at_1000
value: 48.304
- type: mrr_at_3
value: 44.915
- type: mrr_at_5
value: 46.516999999999996
- type: ndcg_at_1
value: 38.306000000000004
- type: ndcg_at_10
value: 47.394999999999996
- type: ndcg_at_100
value: 53.086999999999996
- type: ndcg_at_1000
value: 54.94799999999999
- type: ndcg_at_3
value: 42.384
- type: ndcg_at_5
value: 45.055
- type: precision_at_1
value: 38.306000000000004
- type: precision_at_10
value: 8.624
- type: precision_at_100
value: 1.325
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.18
- type: precision_at_5
value: 14.418000000000001
- type: recall_at_1
value: 30.834
- type: recall_at_10
value: 58.977000000000004
- type: recall_at_100
value: 82.78
- type: recall_at_1000
value: 94.825
- type: recall_at_3
value: 44.954
- type: recall_at_5
value: 51.925
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: cqadupstack/programmers
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.549000000000003
- type: map_at_10
value: 38.796
- type: map_at_100
value: 40.085
- type: map_at_1000
value: 40.198
- type: map_at_3
value: 35.412
- type: map_at_5
value: 37.116
- type: mrr_at_1
value: 35.388
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.445
- type: mrr_at_1000
value: 45.491
- type: mrr_at_3
value: 41.952
- type: mrr_at_5
value: 43.368
- type: ndcg_at_1
value: 35.388
- type: ndcg_at_10
value: 44.894
- type: ndcg_at_100
value: 50.166999999999994
- type: ndcg_at_1000
value: 52.308
- type: ndcg_at_3
value: 39.478
- type: ndcg_at_5
value: 41.608000000000004
- type: precision_at_1
value: 35.388
- type: precision_at_10
value: 8.322000000000001
- type: precision_at_100
value: 1.2670000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 18.836
- type: precision_at_5
value: 13.333
- type: recall_at_1
value: 28.549000000000003
- type: recall_at_10
value: 57.229
- type: recall_at_100
value: 79.541
- type: recall_at_1000
value: 93.887
- type: recall_at_3
value: 42.056
- type: recall_at_5
value: 47.705999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.897333333333336
- type: map_at_10
value: 36.28758333333334
- type: map_at_100
value: 37.480083333333326
- type: map_at_1000
value: 37.59683333333333
- type: map_at_3
value: 33.3485
- type: map_at_5
value: 34.98283333333334
- type: mrr_at_1
value: 31.98916666666667
- type: mrr_at_10
value: 40.61116666666666
- type: mrr_at_100
value: 41.42133333333333
- type: mrr_at_1000
value: 41.476333333333336
- type: mrr_at_3
value: 38.19366666666667
- type: mrr_at_5
value: 39.53125
- type: ndcg_at_1
value: 31.98916666666667
- type: ndcg_at_10
value: 41.73475
- type: ndcg_at_100
value: 46.72291666666666
- type: ndcg_at_1000
value: 48.94916666666666
- type: ndcg_at_3
value: 36.883833333333335
- type: ndcg_at_5
value: 39.114
- type: precision_at_1
value: 31.98916666666667
- type: precision_at_10
value: 7.364083333333335
- type: precision_at_100
value: 1.1604166666666667
- type: precision_at_1000
value: 0.15433333333333335
- type: precision_at_3
value: 17.067500000000003
- type: precision_at_5
value: 12.091916666666666
- type: recall_at_1
value: 26.897333333333336
- type: recall_at_10
value: 53.485749999999996
- type: recall_at_100
value: 75.38716666666666
- type: recall_at_1000
value: 90.75841666666666
- type: recall_at_3
value: 39.86725
- type: recall_at_5
value: 45.683416666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: cqadupstack/stats
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.544
- type: map_at_10
value: 30.85
- type: map_at_100
value: 31.674000000000003
- type: map_at_1000
value: 31.778000000000002
- type: map_at_3
value: 28.451999999999998
- type: map_at_5
value: 29.797
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.725
- type: mrr_at_100
value: 34.439
- type: mrr_at_1000
value: 34.512
- type: mrr_at_3
value: 31.493
- type: mrr_at_5
value: 32.735
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 35.207
- type: ndcg_at_100
value: 39.406
- type: ndcg_at_1000
value: 42.021
- type: ndcg_at_3
value: 30.842000000000002
- type: ndcg_at_5
value: 32.882
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.66
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 13.395000000000001
- type: precision_at_5
value: 9.386999999999999
- type: recall_at_1
value: 23.544
- type: recall_at_10
value: 45.769
- type: recall_at_100
value: 65.33200000000001
- type: recall_at_1000
value: 84.82499999999999
- type: recall_at_3
value: 33.665
- type: recall_at_5
value: 38.795
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: cqadupstack/tex
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.524
- type: map_at_10
value: 23.65
- type: map_at_100
value: 24.654999999999998
- type: map_at_1000
value: 24.786
- type: map_at_3
value: 21.441
- type: map_at_5
value: 22.664
- type: mrr_at_1
value: 20.372
- type: mrr_at_10
value: 27.548000000000002
- type: mrr_at_100
value: 28.37
- type: mrr_at_1000
value: 28.449
- type: mrr_at_3
value: 25.291999999999998
- type: mrr_at_5
value: 26.596999999999998
- type: ndcg_at_1
value: 20.372
- type: ndcg_at_10
value: 28.194000000000003
- type: ndcg_at_100
value: 32.955
- type: ndcg_at_1000
value: 35.985
- type: ndcg_at_3
value: 24.212
- type: ndcg_at_5
value: 26.051000000000002
- type: precision_at_1
value: 20.372
- type: precision_at_10
value: 5.237
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 11.643
- type: precision_at_5
value: 8.424
- type: recall_at_1
value: 16.524
- type: recall_at_10
value: 37.969
- type: recall_at_100
value: 59.48
- type: recall_at_1000
value: 81.04599999999999
- type: recall_at_3
value: 26.647
- type: recall_at_5
value: 31.558999999999997
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: cqadupstack/unix
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.273000000000003
- type: map_at_10
value: 35.176
- type: map_at_100
value: 36.367
- type: map_at_1000
value: 36.473
- type: map_at_3
value: 32.583
- type: map_at_5
value: 33.977000000000004
- type: mrr_at_1
value: 30.97
- type: mrr_at_10
value: 39.31
- type: mrr_at_100
value: 40.225
- type: mrr_at_1000
value: 40.284
- type: mrr_at_3
value: 37.111
- type: mrr_at_5
value: 38.296
- type: ndcg_at_1
value: 30.97
- type: ndcg_at_10
value: 40.323
- type: ndcg_at_100
value: 45.725
- type: ndcg_at_1000
value: 48.022
- type: ndcg_at_3
value: 35.772
- type: ndcg_at_5
value: 37.741
- type: precision_at_1
value: 30.97
- type: precision_at_10
value: 6.819
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 16.387
- type: precision_at_5
value: 11.437
- type: recall_at_1
value: 26.273000000000003
- type: recall_at_10
value: 51.772
- type: recall_at_100
value: 75.362
- type: recall_at_1000
value: 91.232
- type: recall_at_3
value: 39.172000000000004
- type: recall_at_5
value: 44.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: cqadupstack/webmasters
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.326
- type: map_at_10
value: 37.97
- type: map_at_100
value: 39.602
- type: map_at_1000
value: 39.812999999999995
- type: map_at_3
value: 34.838
- type: map_at_5
value: 36.582
- type: mrr_at_1
value: 33.992
- type: mrr_at_10
value: 42.875
- type: mrr_at_100
value: 43.78
- type: mrr_at_1000
value: 43.827
- type: mrr_at_3
value: 40.481
- type: mrr_at_5
value: 41.657
- type: ndcg_at_1
value: 33.992
- type: ndcg_at_10
value: 44.122
- type: ndcg_at_100
value: 49.652
- type: ndcg_at_1000
value: 51.919000000000004
- type: ndcg_at_3
value: 39.285
- type: ndcg_at_5
value: 41.449999999999996
- type: precision_at_1
value: 33.992
- type: precision_at_10
value: 8.32
- type: precision_at_100
value: 1.617
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 18.445
- type: precision_at_5
value: 13.281
- type: recall_at_1
value: 28.326
- type: recall_at_10
value: 55.822
- type: recall_at_100
value: 80.352
- type: recall_at_1000
value: 94.441
- type: recall_at_3
value: 41.704
- type: recall_at_5
value: 47.513
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: cqadupstack/wordpress
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.526
- type: map_at_10
value: 30.206
- type: map_at_100
value: 31.142999999999997
- type: map_at_1000
value: 31.246000000000002
- type: map_at_3
value: 27.807
- type: map_at_5
value: 29.236
- type: mrr_at_1
value: 24.399
- type: mrr_at_10
value: 32.515
- type: mrr_at_100
value: 33.329
- type: mrr_at_1000
value: 33.400999999999996
- type: mrr_at_3
value: 30.159999999999997
- type: mrr_at_5
value: 31.482
- type: ndcg_at_1
value: 24.399
- type: ndcg_at_10
value: 34.806
- type: ndcg_at_100
value: 39.669
- type: ndcg_at_1000
value: 42.234
- type: ndcg_at_3
value: 30.144
- type: ndcg_at_5
value: 32.481
- type: precision_at_1
value: 24.399
- type: precision_at_10
value: 5.453
- type: precision_at_100
value: 0.8410000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.815999999999999
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 22.526
- type: recall_at_10
value: 46.568
- type: recall_at_100
value: 69.56099999999999
- type: recall_at_1000
value: 88.474
- type: recall_at_3
value: 34.205000000000005
- type: recall_at_5
value: 39.885999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.363000000000001
- type: map_at_10
value: 24.101
- type: map_at_100
value: 26.240000000000002
- type: map_at_1000
value: 26.427
- type: map_at_3
value: 20.125
- type: map_at_5
value: 22.128
- type: mrr_at_1
value: 32.182
- type: mrr_at_10
value: 44.711
- type: mrr_at_100
value: 45.523
- type: mrr_at_1000
value: 45.551
- type: mrr_at_3
value: 41.443999999999996
- type: mrr_at_5
value: 43.473
- type: ndcg_at_1
value: 32.182
- type: ndcg_at_10
value: 33.495000000000005
- type: ndcg_at_100
value: 41.192
- type: ndcg_at_1000
value: 44.346000000000004
- type: ndcg_at_3
value: 27.651999999999997
- type: ndcg_at_5
value: 29.634
- type: precision_at_1
value: 32.182
- type: precision_at_10
value: 10.391
- type: precision_at_100
value: 1.8679999999999999
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 20.586
- type: precision_at_5
value: 15.648000000000001
- type: recall_at_1
value: 14.363000000000001
- type: recall_at_10
value: 39.706
- type: recall_at_100
value: 65.763
- type: recall_at_1000
value: 83.296
- type: recall_at_3
value: 25.064999999999998
- type: recall_at_5
value: 31.085
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.698
- type: map_at_10
value: 20.237
- type: map_at_100
value: 28.534
- type: map_at_1000
value: 30.346
- type: map_at_3
value: 14.097999999999999
- type: map_at_5
value: 16.567999999999998
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 76.35
- type: mrr_at_100
value: 76.676
- type: mrr_at_1000
value: 76.68
- type: mrr_at_3
value: 74.792
- type: mrr_at_5
value: 75.717
- type: ndcg_at_1
value: 56.25
- type: ndcg_at_10
value: 43.578
- type: ndcg_at_100
value: 47.928
- type: ndcg_at_1000
value: 55.312
- type: ndcg_at_3
value: 47.744
- type: ndcg_at_5
value: 45.257
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 35.275
- type: precision_at_100
value: 10.985
- type: precision_at_1000
value: 2.235
- type: precision_at_3
value: 52.0
- type: precision_at_5
value: 44.45
- type: recall_at_1
value: 8.698
- type: recall_at_10
value: 26.661
- type: recall_at_100
value: 54.686
- type: recall_at_1000
value: 77.795
- type: recall_at_3
value: 15.536
- type: recall_at_5
value: 19.578
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.385000000000005
- type: f1
value: 43.818784352804165
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 75.399
- type: map_at_10
value: 83.02199999999999
- type: map_at_100
value: 83.204
- type: map_at_1000
value: 83.217
- type: map_at_3
value: 81.86
- type: map_at_5
value: 82.677
- type: mrr_at_1
value: 81.233
- type: mrr_at_10
value: 88.10900000000001
- type: mrr_at_100
value: 88.17099999999999
- type: mrr_at_1000
value: 88.172
- type: mrr_at_3
value: 87.289
- type: mrr_at_5
value: 87.897
- type: ndcg_at_1
value: 81.233
- type: ndcg_at_10
value: 86.80600000000001
- type: ndcg_at_100
value: 87.492
- type: ndcg_at_1000
value: 87.71600000000001
- type: ndcg_at_3
value: 84.975
- type: ndcg_at_5
value: 86.158
- type: precision_at_1
value: 81.233
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.085
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 32.178000000000004
- type: precision_at_5
value: 20.069
- type: recall_at_1
value: 75.399
- type: recall_at_10
value: 93.533
- type: recall_at_100
value: 96.32300000000001
- type: recall_at_1000
value: 97.695
- type: recall_at_3
value: 88.61099999999999
- type: recall_at_5
value: 91.617
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 20.564
- type: map_at_10
value: 33.162000000000006
- type: map_at_100
value: 35.146
- type: map_at_1000
value: 35.32
- type: map_at_3
value: 28.786
- type: map_at_5
value: 31.22
- type: mrr_at_1
value: 40.278000000000006
- type: mrr_at_10
value: 48.577
- type: mrr_at_100
value: 49.385
- type: mrr_at_1000
value: 49.423
- type: mrr_at_3
value: 46.116
- type: mrr_at_5
value: 47.305
- type: ndcg_at_1
value: 40.278000000000006
- type: ndcg_at_10
value: 40.998000000000005
- type: ndcg_at_100
value: 48.329
- type: ndcg_at_1000
value: 51.148
- type: ndcg_at_3
value: 36.852000000000004
- type: ndcg_at_5
value: 38.146
- type: precision_at_1
value: 40.278000000000006
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.9120000000000001
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 24.383
- type: precision_at_5
value: 18.179000000000002
- type: recall_at_1
value: 20.564
- type: recall_at_10
value: 48.327999999999996
- type: recall_at_100
value: 75.89
- type: recall_at_1000
value: 92.826
- type: recall_at_3
value: 33.517
- type: recall_at_5
value: 39.46
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 34.294000000000004
- type: map_at_10
value: 55.435
- type: map_at_100
value: 56.507
- type: map_at_1000
value: 56.57600000000001
- type: map_at_3
value: 51.654999999999994
- type: map_at_5
value: 54.086
- type: mrr_at_1
value: 68.589
- type: mrr_at_10
value: 75.837
- type: mrr_at_100
value: 76.142
- type: mrr_at_1000
value: 76.155
- type: mrr_at_3
value: 74.50099999999999
- type: mrr_at_5
value: 75.339
- type: ndcg_at_1
value: 68.589
- type: ndcg_at_10
value: 63.846000000000004
- type: ndcg_at_100
value: 67.65
- type: ndcg_at_1000
value: 69.015
- type: ndcg_at_3
value: 58.355999999999995
- type: ndcg_at_5
value: 61.489000000000004
- type: precision_at_1
value: 68.589
- type: precision_at_10
value: 13.738
- type: precision_at_100
value: 1.67
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 37.736
- type: precision_at_5
value: 25.11
- type: recall_at_1
value: 34.294000000000004
- type: recall_at_10
value: 68.69
- type: recall_at_100
value: 83.477
- type: recall_at_1000
value: 92.465
- type: recall_at_3
value: 56.604
- type: recall_at_5
value: 62.775000000000006
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 75.332
- type: ap
value: 69.58548013224627
- type: f1
value: 75.19505914957745
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 19.373
- type: map_at_10
value: 31.377
- type: map_at_100
value: 32.635
- type: map_at_1000
value: 32.688
- type: map_at_3
value: 27.337
- type: map_at_5
value: 29.608
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 31.928
- type: mrr_at_100
value: 33.14
- type: mrr_at_1000
value: 33.184999999999995
- type: mrr_at_3
value: 27.955999999999996
- type: mrr_at_5
value: 30.209999999999997
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 38.324000000000005
- type: ndcg_at_100
value: 44.45
- type: ndcg_at_1000
value: 45.728
- type: ndcg_at_3
value: 30.099999999999998
- type: ndcg_at_5
value: 34.157
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 6.246
- type: precision_at_100
value: 0.932
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.937000000000001
- type: precision_at_5
value: 9.817
- type: recall_at_1
value: 19.373
- type: recall_at_10
value: 59.82300000000001
- type: recall_at_100
value: 88.252
- type: recall_at_1000
value: 97.962
- type: recall_at_3
value: 37.480999999999995
- type: recall_at_5
value: 47.215
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.08800729594162
- type: f1
value: 93.6743110282188
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.04742362061104
- type: f1
value: 59.62885599991211
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.58170813718897
- type: f1
value: 73.57458347240402
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.15601882985877
- type: f1
value: 79.08126473478004
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.551020623875196
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.110159113704523
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.960982592404424
- type: mrr
value: 33.106781262600435
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.679
- type: map_at_10
value: 13.922
- type: map_at_100
value: 17.949
- type: map_at_1000
value: 19.573999999999998
- type: map_at_3
value: 10.061
- type: map_at_5
value: 11.931
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 56.701
- type: mrr_at_100
value: 57.221
- type: mrr_at_1000
value: 57.260999999999996
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.85099999999999
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 37.117
- type: ndcg_at_100
value: 34.633
- type: ndcg_at_1000
value: 43.498
- type: ndcg_at_3
value: 42.475
- type: ndcg_at_5
value: 40.438
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 27.647
- type: precision_at_100
value: 9.08
- type: precision_at_1000
value: 2.218
- type: precision_at_3
value: 39.938
- type: precision_at_5
value: 35.17
- type: recall_at_1
value: 5.679
- type: recall_at_10
value: 18.552
- type: recall_at_100
value: 35.799
- type: recall_at_1000
value: 68.029
- type: recall_at_3
value: 11.43
- type: recall_at_5
value: 14.71
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.055999999999997
- type: map_at_10
value: 45.547
- type: map_at_100
value: 46.591
- type: map_at_1000
value: 46.615
- type: map_at_3
value: 40.81
- type: map_at_5
value: 43.673
- type: mrr_at_1
value: 32.763999999999996
- type: mrr_at_10
value: 47.937999999999995
- type: mrr_at_100
value: 48.691
- type: mrr_at_1000
value: 48.705
- type: mrr_at_3
value: 43.984
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 32.763999999999996
- type: ndcg_at_10
value: 53.891999999999996
- type: ndcg_at_100
value: 58.167
- type: ndcg_at_1000
value: 58.67099999999999
- type: ndcg_at_3
value: 45.007999999999996
- type: ndcg_at_5
value: 49.805
- type: precision_at_1
value: 32.763999999999996
- type: precision_at_10
value: 9.186
- type: precision_at_100
value: 1.1560000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.012
- type: precision_at_5
value: 15.348
- type: recall_at_1
value: 29.055999999999997
- type: recall_at_10
value: 76.864
- type: recall_at_100
value: 95.254
- type: recall_at_1000
value: 98.914
- type: recall_at_3
value: 53.911
- type: recall_at_5
value: 64.982
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.393
- type: map_at_10
value: 83.408
- type: map_at_100
value: 84.071
- type: map_at_1000
value: 84.086
- type: map_at_3
value: 80.372
- type: map_at_5
value: 82.245
- type: mrr_at_1
value: 80.06
- type: mrr_at_10
value: 86.546
- type: mrr_at_100
value: 86.661
- type: mrr_at_1000
value: 86.66199999999999
- type: mrr_at_3
value: 85.56700000000001
- type: mrr_at_5
value: 86.215
- type: ndcg_at_1
value: 80.07
- type: ndcg_at_10
value: 87.372
- type: ndcg_at_100
value: 88.683
- type: ndcg_at_1000
value: 88.78
- type: ndcg_at_3
value: 84.384
- type: ndcg_at_5
value: 85.978
- type: precision_at_1
value: 80.07
- type: precision_at_10
value: 13.345
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.973
- type: precision_at_5
value: 24.334
- type: recall_at_1
value: 69.393
- type: recall_at_10
value: 94.994
- type: recall_at_100
value: 99.523
- type: recall_at_1000
value: 99.97399999999999
- type: recall_at_3
value: 86.459
- type: recall_at_5
value: 90.962
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 53.02365304347829
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.4722130918676
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.233
- type: map_at_10
value: 10.333
- type: map_at_100
value: 12.286
- type: map_at_1000
value: 12.594
- type: map_at_3
value: 7.514
- type: map_at_5
value: 8.774
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.232
- type: mrr_at_100
value: 32.287
- type: mrr_at_1000
value: 32.352
- type: mrr_at_3
value: 27.766999999999996
- type: mrr_at_5
value: 29.487000000000002
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 17.957
- type: ndcg_at_100
value: 25.526
- type: ndcg_at_1000
value: 31.097
- type: ndcg_at_3
value: 16.915
- type: ndcg_at_5
value: 14.579
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 9.41
- type: precision_at_100
value: 2.032
- type: precision_at_1000
value: 0.337
- type: precision_at_3
value: 15.767000000000001
- type: precision_at_5
value: 12.659999999999998
- type: recall_at_1
value: 4.233
- type: recall_at_10
value: 19.067999999999998
- type: recall_at_100
value: 41.257
- type: recall_at_1000
value: 68.487
- type: recall_at_3
value: 9.618
- type: recall_at_5
value: 12.853
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 82.25303886615637
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 78.27678362978094
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 85.5228883863618
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 82.48847836687274
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 88.76235312662311
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 87.10893533398001
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.10224405448504
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 68.25088774601221
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 87.15751321128134
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 79.23418699664575
- type: mrr
value: 93.72032288698955
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 56.511
- type: map_at_10
value: 67.062
- type: map_at_100
value: 67.537
- type: map_at_1000
value: 67.553
- type: map_at_3
value: 63.375
- type: map_at_5
value: 65.828
- type: mrr_at_1
value: 59.333000000000006
- type: mrr_at_10
value: 67.95
- type: mrr_at_100
value: 68.284
- type: mrr_at_1000
value: 68.30000000000001
- type: mrr_at_3
value: 65.0
- type: mrr_at_5
value: 66.93299999999999
- type: ndcg_at_1
value: 59.333000000000006
- type: ndcg_at_10
value: 72.08099999999999
- type: ndcg_at_100
value: 74.232
- type: ndcg_at_1000
value: 74.657
- type: ndcg_at_3
value: 65.72200000000001
- type: ndcg_at_5
value: 69.395
- type: precision_at_1
value: 59.333000000000006
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 17.533
- type: recall_at_1
value: 56.511
- type: recall_at_10
value: 86.63300000000001
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 70.217
- type: recall_at_5
value: 78.806
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83861386138614
- type: cos_sim_ap
value: 96.24728474711715
- type: cos_sim_f1
value: 91.76351692774129
- type: cos_sim_precision
value: 92.74770173646579
- type: cos_sim_recall
value: 90.8
- type: dot_accuracy
value: 99.62475247524752
- type: dot_ap
value: 88.12302791709324
- type: dot_f1
value: 81.0187409899087
- type: dot_precision
value: 77.98334875115633
- type: dot_recall
value: 84.3
- type: euclidean_accuracy
value: 99.83465346534653
- type: euclidean_ap
value: 95.79574410387337
- type: euclidean_f1
value: 91.56139464375947
- type: euclidean_precision
value: 92.54341164453524
- type: euclidean_recall
value: 90.60000000000001
- type: manhattan_accuracy
value: 99.84059405940594
- type: manhattan_ap
value: 95.81230332276807
- type: manhattan_f1
value: 91.80661577608143
- type: manhattan_precision
value: 93.47150259067357
- type: manhattan_recall
value: 90.2
- type: max_accuracy
value: 99.84059405940594
- type: max_ap
value: 96.24728474711715
- type: max_f1
value: 91.80661577608143
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.035694955649866
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.00935398440242
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.61138657342161
- type: mrr
value: 50.26590749936338
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.994071916424655
- type: cos_sim_spearman
value: 30.010135460886296
- type: dot_pearson
value: 27.03290596322524
- type: dot_spearman
value: 28.824264579690357
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.247
- type: map_at_10
value: 2.01
- type: map_at_100
value: 12.912
- type: map_at_1000
value: 32.35
- type: map_at_3
value: 0.6859999999999999
- type: map_at_5
value: 1.089
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 95.25
- type: mrr_at_100
value: 95.25
- type: mrr_at_1000
value: 95.25
- type: mrr_at_3
value: 95.0
- type: mrr_at_5
value: 95.0
- type: ndcg_at_1
value: 88.0
- type: ndcg_at_10
value: 80.411
- type: ndcg_at_100
value: 63.871
- type: ndcg_at_1000
value: 58.145
- type: ndcg_at_3
value: 84.75399999999999
- type: ndcg_at_5
value: 82.372
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 65.84
- type: precision_at_1000
value: 25.874000000000002
- type: precision_at_3
value: 90.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.247
- type: recall_at_10
value: 2.185
- type: recall_at_100
value: 16.051000000000002
- type: recall_at_1000
value: 55.18300000000001
- type: recall_at_3
value: 0.701
- type: recall_at_5
value: 1.1360000000000001
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.094
- type: map_at_10
value: 9.078
- type: map_at_100
value: 15.152
- type: map_at_1000
value: 16.773
- type: map_at_3
value: 4.67
- type: map_at_5
value: 6.111
- type: mrr_at_1
value: 24.490000000000002
- type: mrr_at_10
value: 39.989000000000004
- type: mrr_at_100
value: 41.248000000000005
- type: mrr_at_1000
value: 41.248000000000005
- type: mrr_at_3
value: 37.075
- type: mrr_at_5
value: 38.503
- type: ndcg_at_1
value: 21.429000000000002
- type: ndcg_at_10
value: 22.312
- type: ndcg_at_100
value: 35.077999999999996
- type: ndcg_at_1000
value: 46.903
- type: ndcg_at_3
value: 24.241
- type: ndcg_at_5
value: 21.884
- type: precision_at_1
value: 24.490000000000002
- type: precision_at_10
value: 20.816000000000003
- type: precision_at_100
value: 7.673000000000001
- type: precision_at_1000
value: 1.569
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 22.857
- type: recall_at_1
value: 2.094
- type: recall_at_10
value: 15.546
- type: recall_at_100
value: 47.764
- type: recall_at_1000
value: 84.461
- type: recall_at_3
value: 5.994
- type: recall_at_5
value: 8.967
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.92240000000001
- type: ap
value: 14.16088899225379
- type: f1
value: 54.04609416028299
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.764006791171475
- type: f1
value: 61.06042158638947
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.37015403955057
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.8510460749836
- type: cos_sim_ap
value: 76.13675917697662
- type: cos_sim_f1
value: 69.72121212121213
- type: cos_sim_precision
value: 64.48430493273543
- type: cos_sim_recall
value: 75.8839050131926
- type: dot_accuracy
value: 82.2793109614353
- type: dot_ap
value: 61.68231214221829
- type: dot_f1
value: 59.873802290254716
- type: dot_precision
value: 53.73322147651006
- type: dot_recall
value: 67.59894459102902
- type: euclidean_accuracy
value: 86.78548012159504
- type: euclidean_ap
value: 75.72625794456354
- type: euclidean_f1
value: 70.13506753376687
- type: euclidean_precision
value: 66.66666666666666
- type: euclidean_recall
value: 73.98416886543535
- type: manhattan_accuracy
value: 86.78548012159504
- type: manhattan_ap
value: 75.68264053123454
- type: manhattan_f1
value: 70.11952191235059
- type: manhattan_precision
value: 66.38378123526638
- type: manhattan_recall
value: 74.30079155672823
- type: max_accuracy
value: 86.8510460749836
- type: max_ap
value: 76.13675917697662
- type: max_f1
value: 70.13506753376687
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.20712539294446
- type: cos_sim_ap
value: 86.227146559573
- type: cos_sim_f1
value: 78.8050795036932
- type: cos_sim_precision
value: 74.7085201793722
- type: cos_sim_recall
value: 83.37696335078533
- type: dot_accuracy
value: 86.59525749990297
- type: dot_ap
value: 79.7714972191685
- type: dot_f1
value: 73.45451896105789
- type: dot_precision
value: 69.70891239715135
- type: dot_recall
value: 77.62550046196489
- type: euclidean_accuracy
value: 88.92575775216362
- type: euclidean_ap
value: 85.58942167175054
- type: euclidean_f1
value: 78.03423522915516
- type: euclidean_precision
value: 74.76193835084996
- type: euclidean_recall
value: 81.60609793655682
- type: manhattan_accuracy
value: 88.92769821865176
- type: manhattan_ap
value: 85.58316068024254
- type: manhattan_f1
value: 78.03337843933242
- type: manhattan_precision
value: 76.23384253819037
- type: manhattan_recall
value: 79.91992608561749
- type: max_accuracy
value: 89.20712539294446
- type: max_ap
value: 86.227146559573
- type: max_f1
value: 78.8050795036932
---
# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
> LLM2Vec is a simple recipe for converting decoder-only LLMs into text encoders. It consists of three steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance.
- **Repository:** https://github.com/McGill-NLP/llm2vec
- **Paper:** https://arxiv.org/abs/2404.05961
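As a rough illustration of step 3 of the recipe above, the sketch below shows a SimCSE-style unsupervised contrastive objective: two dropout-perturbed encodings of the same sentences are scored against in-batch negatives. The function name, temperature, and tensor shapes are illustrative assumptions, not the authors' training code; see the repository above for the actual implementation.
```python
# Illustrative sketch only (assumes a SimCSE-style InfoNCE loss with in-batch
# negatives); this is NOT the LLM2Vec training code.
import torch
import torch.nn.functional as F

def unsupervised_contrastive_loss(emb_a: torch.Tensor,
                                  emb_b: torch.Tensor,
                                  temperature: float = 0.05) -> torch.Tensor:
    """emb_a and emb_b are two embeddings of the same batch of sentences,
    e.g. obtained with different dropout masks (shape: batch_size x hidden_dim)."""
    a = F.normalize(emb_a, p=2, dim=1)
    b = F.normalize(emb_b, p=2, dim=1)
    logits = a @ b.T / temperature                       # pairwise cosine similarities
    labels = torch.arange(a.size(0), device=a.device)    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: random embeddings stand in for two dropout-perturbed encodings.
batch = torch.randn(8, 768)
loss = unsupervised_contrastive_loss(batch, batch + 0.01 * torch.randn_like(batch))
print(loss.item())
```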
## Installation
```bash
pip install llm2vec
```
## Usage
```python
from llm2vec import LLM2Vec
import torch
from transformers import AutoTokenizer, AutoModel, AutoConfig
from peft import PeftModel
# Loading the base Sheared-LLaMA model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model.
tokenizer = AutoTokenizer.from_pretrained(
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp"
)
config = AutoConfig.from_pretrained(
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp", trust_remote_code=True
)
model = AutoModel.from_pretrained(
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp",
trust_remote_code=True,
config=config,
torch_dtype=torch.bfloat16,
device_map="cuda" if torch.cuda.is_available() else "cpu",
)
model = PeftModel.from_pretrained(
model,
"McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp",
)
model = model.merge_and_unload()  # This can take several minutes on CPU
# Loading the supervised model. This loads the trained LoRA weights on top of the MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + supervised (LoRA).
model = PeftModel.from_pretrained(
model, "McGill-NLP/LLM2Vec-Sheared-LLaMA-mntp-supervised"
)
# Wrapper for encoding and pooling operations
l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512)
# Encoding queries using instructions
instruction = (
"Given a web search query, retrieve relevant passages that answer the query:"
)
queries = [
[instruction, "how much protein should a female eat"],
[instruction, "summit define"],
]
q_reps = l2v.encode(queries)
# Encoding documents. Instructions are not required for documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
d_reps = l2v.encode(documents)
# Compute cosine similarity
q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1)
d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1)
cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1))
print(cos_sim)
"""
tensor([[0.6500, 0.1291],
[0.0916, 0.4733]])
"""
```
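The snippet below is a small, optional continuation of the example above: it turns the `cos_sim` matrix into a per-query ranking using plain PyTorch. The `top_k` value is an arbitrary illustrative choice, not something prescribed by LLM2Vec.
```python
# Minimal follow-up sketch: rank documents for each query by cosine similarity.
# Assumes `cos_sim`, `queries`, and `documents` from the snippet above are in scope.
top_k = min(2, cos_sim.size(1))  # illustrative choice
scores, indices = torch.topk(cos_sim, k=top_k, dim=1)

for q_idx, (doc_scores, doc_indices) in enumerate(zip(scores.tolist(), indices.tolist())):
    print(f"Query: {queries[q_idx][1]}")
    for score, d_idx in zip(doc_scores, doc_indices):
        print(f"  doc {d_idx} (cos={score:.4f}): {documents[d_idx][:60]}...")
```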
## Questions
If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`). | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Lajavaness/bilingual-embedding-base | Lajavaness | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bilingual",
"feature-extraction",
"sentence-similarity",
"transformers",
"sentence-embedding",
"mteb",
"custom_code",
"arxiv:2010.08240",
"arxiv:1911.02116",
"arxiv:1908.10084",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,719 | 1,731 | 5,585 | 6 | ---
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- sentence-embedding
- mteb
model-index:
- name: bilingual-embedding-base
results:
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 64.71347977413274
- type: v_measures
value:
- 0.6276406100583347
- 0.6380984027637511
- 0.6322659733117469
- 0.652473343931587
- 0.6411502615838207
- type: v_measure
value: 45.56886694296516
- type: v_measures
value:
- 0.4359417286866465
- 0.4218168523895086
- 0.419693469863105
- 0.4981808644314091
- 0.4546120704986696
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 65393d0d7a08a10b4e348135e824f385d420b0fd
metrics:
- type: map
value: 70.45979049191963
- type: mrr
value: 71.61250582558874
- type: nAUC_map_diff1
value: 53.81172404763562
- type: nAUC_map_max
value: 11.344605627946006
- type: nAUC_mrr_diff1
value: 53.73572411920392
- type: nAUC_mrr_max
value: 11.900785644756448
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd
metrics:
- type: map_at_1
value: 24.956999999999997
- type: map_at_10
value: 35.519
- type: map_at_100
value: 36.602000000000004
- type: map_at_1000
value: 36.667
- type: map_at_20
value: 36.199
- type: map_at_3
value: 32.397999999999996
- type: map_at_5
value: 34.121
- type: mrr_at_1
value: 24.956822107081177
- type: mrr_at_10
value: 35.5187515420676
- type: mrr_at_100
value: 36.60233410773095
- type: mrr_at_1000
value: 36.6673425266803
- type: mrr_at_20
value: 36.19886807640962
- type: mrr_at_3
value: 32.39781232009212
- type: mrr_at_5
value: 34.120610247553294
- type: nauc_map_at_1000_diff1
value: 35.47549954610535
- type: nauc_map_at_1000_max
value: 32.90078190148035
- type: nauc_map_at_100_diff1
value: 35.450977423889455
- type: nauc_map_at_100_max
value: 32.94163322687819
- type: nauc_map_at_10_diff1
value: 35.33565787959566
- type: nauc_map_at_10_max
value: 32.64293321320758
- type: nauc_map_at_1_diff1
value: 41.36063963274824
- type: nauc_map_at_1_max
value: 29.231198539339754
- type: nauc_map_at_20_diff1
value: 35.43536321023612
- type: nauc_map_at_20_max
value: 32.88345868813372
- type: nauc_map_at_3_diff1
value: 35.686505961183194
- type: nauc_map_at_3_max
value: 32.00806639149525
- type: nauc_map_at_5_diff1
value: 35.34083004527267
- type: nauc_map_at_5_max
value: 32.34459863266794
- type: nauc_mrr_at_1000_diff1
value: 35.47549954610535
- type: nauc_mrr_at_1000_max
value: 32.90078190148035
- type: nauc_mrr_at_100_diff1
value: 35.450977423889455
- type: nauc_mrr_at_100_max
value: 32.94163322687819
- type: nauc_mrr_at_10_diff1
value: 35.33565787959566
- type: nauc_mrr_at_10_max
value: 32.64293321320758
- type: nauc_mrr_at_1_diff1
value: 41.36063963274824
- type: nauc_mrr_at_1_max
value: 29.231198539339754
- type: nauc_mrr_at_20_diff1
value: 35.43536321023612
- type: nauc_mrr_at_20_max
value: 32.88345868813372
- type: nauc_mrr_at_3_diff1
value: 35.686505961183194
- type: nauc_mrr_at_3_max
value: 32.00806639149525
- type: nauc_mrr_at_5_diff1
value: 35.34083004527267
- type: nauc_mrr_at_5_max
value: 32.34459863266794
- type: nauc_ndcg_at_1000_diff1
value: 34.4325639393338
- type: nauc_ndcg_at_1000_max
value: 34.44147157978809
- type: nauc_ndcg_at_100_diff1
value: 33.88653913244061
- type: nauc_ndcg_at_100_max
value: 35.59739772150559
- type: nauc_ndcg_at_10_diff1
value: 33.51057326557921
- type: nauc_ndcg_at_10_max
value: 34.18300339583124
- type: nauc_ndcg_at_1_diff1
value: 41.36063963274824
- type: nauc_ndcg_at_1_max
value: 29.231198539339754
- type: nauc_ndcg_at_20_diff1
value: 33.804940889687714
- type: nauc_ndcg_at_20_max
value: 35.13066047975507
- type: nauc_ndcg_at_3_diff1
value: 34.167179830350406
- type: nauc_ndcg_at_3_max
value: 32.94370726673571
- type: nauc_ndcg_at_5_diff1
value: 33.52138730976869
- type: nauc_ndcg_at_5_max
value: 33.48723912009657
- type: nauc_precision_at_1000_diff1
value: 22.67827963096167
- type: nauc_precision_at_1000_max
value: 67.17958434804059
- type: nauc_precision_at_100_diff1
value: 24.83989747177019
- type: nauc_precision_at_100_max
value: 56.86858468981111
- type: nauc_precision_at_10_diff1
value: 27.973507280697774
- type: nauc_precision_at_10_max
value: 39.14247264250278
- type: nauc_precision_at_1_diff1
value: 41.36063963274824
- type: nauc_precision_at_1_max
value: 29.231198539339754
- type: nauc_precision_at_20_diff1
value: 28.244479044808852
- type: nauc_precision_at_20_max
value: 44.5538108461975
- type: nauc_precision_at_3_diff1
value: 30.09786756288918
- type: nauc_precision_at_3_max
value: 35.51252285506707
- type: nauc_precision_at_5_diff1
value: 28.444019525872083
- type: nauc_precision_at_5_max
value: 36.71010320598834
- type: nauc_recall_at_1000_diff1
value: 22.67827963096142
- type: nauc_recall_at_1000_max
value: 67.17958434804105
- type: nauc_recall_at_100_diff1
value: 24.83989747177035
- type: nauc_recall_at_100_max
value: 56.868584689811186
- type: nauc_recall_at_10_diff1
value: 27.973507280697774
- type: nauc_recall_at_10_max
value: 39.14247264250283
- type: nauc_recall_at_1_diff1
value: 41.36063963274824
- type: nauc_recall_at_1_max
value: 29.231198539339754
- type: nauc_recall_at_20_diff1
value: 28.244479044808823
- type: nauc_recall_at_20_max
value: 44.553810846197486
- type: nauc_recall_at_3_diff1
value: 30.097867562889213
- type: nauc_recall_at_3_max
value: 35.51252285506708
- type: nauc_recall_at_5_diff1
value: 28.444019525872054
- type: nauc_recall_at_5_max
value: 36.71010320598833
- type: ndcg_at_1
value: 24.956999999999997
- type: ndcg_at_10
value: 41.260999999999996
- type: ndcg_at_100
value: 46.628
- type: ndcg_at_1000
value: 48.339999999999996
- type: ndcg_at_20
value: 43.71
- type: ndcg_at_3
value: 34.771
- type: ndcg_at_5
value: 37.862
- type: precision_at_1
value: 24.956999999999997
- type: precision_at_10
value: 5.963
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_20
value: 3.463
- type: precision_at_3
value: 13.874
- type: precision_at_5
value: 9.818999999999999
- type: recall_at_1
value: 24.956999999999997
- type: recall_at_10
value: 59.629
- type: recall_at_100
value: 84.974
- type: recall_at_1000
value: 98.402
- type: recall_at_20
value: 69.257
- type: recall_at_3
value: 41.623
- type: recall_at_5
value: 49.092999999999996
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.354
- type: f1
value: 38.96127209677864
- type: f1_weighted
value: 38.96127209677863
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 5.405
- type: map_at_10
value: 9.715
- type: map_at_100
value: 10.852
- type: map_at_1000
value: 10.986
- type: map_at_20
value: 10.324
- type: map_at_3
value: 8.183
- type: map_at_5
value: 9.129
- type: mrr_at_1
value: 5.405405405405405
- type: mrr_at_10
value: 9.715250965250963
- type: mrr_at_100
value: 10.852488713991455
- type: mrr_at_1000
value: 10.986128810251913
- type: mrr_at_20
value: 10.324372731493472
- type: mrr_at_3
value: 8.183183183183184
- type: mrr_at_5
value: 9.12912912912913
- type: nauc_map_at_1000_diff1
value: 27.989147369852507
- type: nauc_map_at_1000_max
value: 13.956676274011212
- type: nauc_map_at_100_diff1
value: 27.97931691839789
- type: nauc_map_at_100_max
value: 13.885054931557967
- type: nauc_map_at_10_diff1
value: 28.39484917161119
- type: nauc_map_at_10_max
value: 12.859153645903026
- type: nauc_map_at_1_diff1
value: 40.37326707123
- type: nauc_map_at_1_max
value: 19.53225223707901
- type: nauc_map_at_20_diff1
value: 27.969553431914463
- type: nauc_map_at_20_max
value: 13.993555023169318
- type: nauc_map_at_3_diff1
value: 31.130088078198526
- type: nauc_map_at_3_max
value: 11.59332560179606
- type: nauc_map_at_5_diff1
value: 28.51357198186801
- type: nauc_map_at_5_max
value: 12.629395513278464
- type: nauc_mrr_at_1000_diff1
value: 27.989147369852507
- type: nauc_mrr_at_1000_max
value: 13.956676274011212
- type: nauc_mrr_at_100_diff1
value: 27.97931691839789
- type: nauc_mrr_at_100_max
value: 13.885054931557967
- type: nauc_mrr_at_10_diff1
value: 28.39484917161119
- type: nauc_mrr_at_10_max
value: 12.859153645903026
- type: nauc_mrr_at_1_diff1
value: 40.37326707123
- type: nauc_mrr_at_1_max
value: 19.53225223707901
- type: nauc_mrr_at_20_diff1
value: 27.969553431914463
- type: nauc_mrr_at_20_max
value: 13.993555023169318
- type: nauc_mrr_at_3_diff1
value: 31.130088078198526
- type: nauc_mrr_at_3_max
value: 11.59332560179606
- type: nauc_mrr_at_5_diff1
value: 28.51357198186801
- type: nauc_mrr_at_5_max
value: 12.629395513278464
- type: nauc_ndcg_at_1000_diff1
value: 23.808642111518818
- type: nauc_ndcg_at_1000_max
value: 16.627566094152723
- type: nauc_ndcg_at_100_diff1
value: 24.15753828838301
- type: nauc_ndcg_at_100_max
value: 15.687219543962943
- type: nauc_ndcg_at_10_diff1
value: 25.37534871155007
- type: nauc_ndcg_at_10_max
value: 12.129150838266701
- type: nauc_ndcg_at_1_diff1
value: 40.37326707123
- type: nauc_ndcg_at_1_max
value: 19.53225223707901
- type: nauc_ndcg_at_20_diff1
value: 24.446964236295017
- type: nauc_ndcg_at_20_max
value: 15.718345074121808
- type: nauc_ndcg_at_3_diff1
value: 29.121862169292296
- type: nauc_ndcg_at_3_max
value: 9.481317109332187
- type: nauc_ndcg_at_5_diff1
value: 25.17815567035254
- type: nauc_ndcg_at_5_max
value: 11.187689974665869
- type: nauc_precision_at_1000_diff1
value: 10.131813130975075
- type: nauc_precision_at_1000_max
value: 30.42776840947068
- type: nauc_precision_at_100_diff1
value: 17.8621079715631
- type: nauc_precision_at_100_max
value: 20.81364427037172
- type: nauc_precision_at_10_diff1
value: 20.451314767316635
- type: nauc_precision_at_10_max
value: 11.400840293532708
- type: nauc_precision_at_1_diff1
value: 40.37326707123
- type: nauc_precision_at_1_max
value: 19.53225223707901
- type: nauc_precision_at_20_diff1
value: 19.17170129809007
- type: nauc_precision_at_20_max
value: 20.190489899791007
- type: nauc_precision_at_3_diff1
value: 24.905337103765735
- type: nauc_precision_at_3_max
value: 4.960457155777402
- type: nauc_precision_at_5_diff1
value: 18.725510703139488
- type: nauc_precision_at_5_max
value: 8.555964364751343
- type: nauc_recall_at_1000_diff1
value: 10.131813130975143
- type: nauc_recall_at_1000_max
value: 30.427768409470673
- type: nauc_recall_at_100_diff1
value: 17.862107971563105
- type: nauc_recall_at_100_max
value: 20.813644270371707
- type: nauc_recall_at_10_diff1
value: 20.45131476731657
- type: nauc_recall_at_10_max
value: 11.400840293532651
- type: nauc_recall_at_1_diff1
value: 40.37326707123
- type: nauc_recall_at_1_max
value: 19.53225223707901
- type: nauc_recall_at_20_diff1
value: 19.171701298090017
- type: nauc_recall_at_20_max
value: 20.19048989979099
- type: nauc_recall_at_3_diff1
value: 24.905337103765717
- type: nauc_recall_at_3_max
value: 4.9604571557773935
- type: nauc_recall_at_5_diff1
value: 18.72551070313952
- type: nauc_recall_at_5_max
value: 8.55596436475138
- type: ndcg_at_1
value: 5.405
- type: ndcg_at_10
value: 12.217
- type: ndcg_at_100
value: 18.512999999999998
- type: ndcg_at_1000
value: 22.002
- type: ndcg_at_20
value: 14.551
- type: ndcg_at_3
value: 9.089
- type: ndcg_at_5
value: 10.776
- type: precision_at_1
value: 5.405
- type: precision_at_10
value: 2.027
- type: precision_at_100
value: 0.514
- type: precision_at_1000
value: 0.079
- type: precision_at_20
value: 1.486
- type: precision_at_3
value: 3.904
- type: precision_at_5
value: 3.1530000000000005
- type: recall_at_1
value: 5.405
- type: recall_at_10
value: 20.27
- type: recall_at_100
value: 51.351
- type: recall_at_1000
value: 78.82900000000001
- type: recall_at_20
value: 29.73
- type: recall_at_3
value: 11.712
- type: recall_at_5
value: 15.766
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 25.366610995664384
- type: v_measures
value:
- 0.2853826102888054
- 0.27554329622230517
- 0.2659387504290534
- 0.272421074779971
- 0.23780511730712292
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: reciTAL/mlsum
config: fr
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 44.22735442638259
- type: v_measures
value:
- 0.446401643334608
- 0.45940182243882194
- 0.4535014442023603
- 0.4315840671354274
- 0.40219963696623473
- type: v_measure
value: 44.57521454657146
- type: v_measures
value:
- 0.4501823523199927
- 0.45800459800343646
- 0.4535621236055794
- 0.4353844290212204
- 0.40539074932285035
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.82743501409333
- type: f1
value: 87.07777165714171
- type: f1_weighted
value: 86.70555382175719
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 59.129345443156886
- type: f1
value: 40.374753319633946
- type: f1_weighted
value: 61.735222244513906
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: mteb/masakhanews
config: fra
split: test
revision: 18193f187b92da67168c655c9973a165ed9593dd
metrics:
- type: accuracy
value: 77.44075829383887
- type: f1
value: 73.59099795975669
- type: f1_weighted
value: 77.53638597300765
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 61.57579457743856
- type: v_measures
value:
- 1.0
- 0.19333147584460073
- 0.1918473189330316
- 0.9180419176341081
- 0.7755690164601873
- type: v_measure
value: 46.09882977546196
- type: v_measures
value:
- 1.0
- 0.0
- 0.4881603944307428
- 0.1341399033659235
- 0.6826411909764316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 64.98655010087424
- type: f1
value: 62.07892690857404
- type: f1_weighted
value: 64.04916798028313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 71.72494956287828
- type: f1
value: 70.7613627592262
- type: f1_weighted
value: 71.59023734198762
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 15.643
- type: map_at_10
value: 23.263
- type: map_at_100
value: 24.282
- type: map_at_1000
value: 24.386
- type: map_at_20
value: 23.810000000000002
- type: map_at_3
value: 20.857
- type: map_at_5
value: 22.127
- type: mrr_at_1
value: 15.642915642915641
- type: mrr_at_10
value: 23.26313326313326
- type: mrr_at_100
value: 24.28242740941141
- type: mrr_at_1000
value: 24.38579869502159
- type: mrr_at_20
value: 23.809503335080308
- type: mrr_at_3
value: 20.857220857220817
- type: mrr_at_5
value: 22.126672126672087
- type: nauc_map_at_1000_diff1
value: 22.791507278613405
- type: nauc_map_at_1000_max
value: 34.49738411163668
- type: nauc_map_at_100_diff1
value: 22.753348275392053
- type: nauc_map_at_100_max
value: 34.503329867035156
- type: nauc_map_at_10_diff1
value: 22.949396903228543
- type: nauc_map_at_10_max
value: 34.59635858547973
- type: nauc_map_at_1_diff1
value: 29.68671981414948
- type: nauc_map_at_1_max
value: 32.40476256531568
- type: nauc_map_at_20_diff1
value: 22.839943329322214
- type: nauc_map_at_20_max
value: 34.5512812666387
- type: nauc_map_at_3_diff1
value: 24.531567517208703
- type: nauc_map_at_3_max
value: 34.63194411307717
- type: nauc_map_at_5_diff1
value: 23.785928597532532
- type: nauc_map_at_5_max
value: 34.49711117096583
- type: nauc_mrr_at_1000_diff1
value: 22.791507278613405
- type: nauc_mrr_at_1000_max
value: 34.49738411163668
- type: nauc_mrr_at_100_diff1
value: 22.753348275392053
- type: nauc_mrr_at_100_max
value: 34.503329867035156
- type: nauc_mrr_at_10_diff1
value: 22.949396903228543
- type: nauc_mrr_at_10_max
value: 34.59635858547973
- type: nauc_mrr_at_1_diff1
value: 29.68671981414948
- type: nauc_mrr_at_1_max
value: 32.40476256531568
- type: nauc_mrr_at_20_diff1
value: 22.839943329322214
- type: nauc_mrr_at_20_max
value: 34.5512812666387
- type: nauc_mrr_at_3_diff1
value: 24.531567517208703
- type: nauc_mrr_at_3_max
value: 34.63194411307717
- type: nauc_mrr_at_5_diff1
value: 23.785928597532532
- type: nauc_mrr_at_5_max
value: 34.49711117096583
- type: nauc_ndcg_at_1000_diff1
value: 20.36281863938966
- type: nauc_ndcg_at_1000_max
value: 34.39195365895218
- type: nauc_ndcg_at_100_diff1
value: 19.226093736031558
- type: nauc_ndcg_at_100_max
value: 34.598730272263424
- type: nauc_ndcg_at_10_diff1
value: 20.175947236861635
- type: nauc_ndcg_at_10_max
value: 35.06289054957336
- type: nauc_ndcg_at_1_diff1
value: 29.68671981414948
- type: nauc_ndcg_at_1_max
value: 32.40476256531568
- type: nauc_ndcg_at_20_diff1
value: 19.76832604541867
- type: nauc_ndcg_at_20_max
value: 34.88676463098282
- type: nauc_ndcg_at_3_diff1
value: 23.165684344826936
- type: nauc_ndcg_at_3_max
value: 35.058555779606806
- type: nauc_ndcg_at_5_diff1
value: 22.008327776501197
- type: nauc_ndcg_at_5_max
value: 34.81586957799795
- type: nauc_precision_at_1000_diff1
value: -0.17116637411109903
- type: nauc_precision_at_1000_max
value: 22.79067675552558
- type: nauc_precision_at_100_diff1
value: 6.442518254575118
- type: nauc_precision_at_100_max
value: 34.22531735083736
- type: nauc_precision_at_10_diff1
value: 13.139149147711764
- type: nauc_precision_at_10_max
value: 36.17629609592965
- type: nauc_precision_at_1_diff1
value: 29.68671981414948
- type: nauc_precision_at_1_max
value: 32.40476256531568
- type: nauc_precision_at_20_diff1
value: 11.54118462661454
- type: nauc_precision_at_20_max
value: 35.56967988819633
- type: nauc_precision_at_3_diff1
value: 19.81196490555276
- type: nauc_precision_at_3_max
value: 36.06171743276854
- type: nauc_precision_at_5_diff1
value: 17.72477129059423
- type: nauc_precision_at_5_max
value: 35.49631000776651
- type: nauc_recall_at_1000_diff1
value: -0.17116637411164665
- type: nauc_recall_at_1000_max
value: 22.790676755525375
- type: nauc_recall_at_100_diff1
value: 6.442518254575061
- type: nauc_recall_at_100_max
value: 34.22531735083735
- type: nauc_recall_at_10_diff1
value: 13.139149147711779
- type: nauc_recall_at_10_max
value: 36.17629609592963
- type: nauc_recall_at_1_diff1
value: 29.68671981414948
- type: nauc_recall_at_1_max
value: 32.40476256531568
- type: nauc_recall_at_20_diff1
value: 11.541184626614548
- type: nauc_recall_at_20_max
value: 35.56967988819635
- type: nauc_recall_at_3_diff1
value: 19.811964905552767
- type: nauc_recall_at_3_max
value: 36.06171743276855
- type: nauc_recall_at_5_diff1
value: 17.72477129059422
- type: nauc_recall_at_5_max
value: 35.49631000776651
- type: ndcg_at_1
value: 15.643
- type: ndcg_at_10
value: 27.596999999999998
- type: ndcg_at_100
value: 33.036
- type: ndcg_at_1000
value: 36.348
- type: ndcg_at_20
value: 29.586000000000002
- type: ndcg_at_3
value: 22.567
- type: ndcg_at_5
value: 24.858
- type: precision_at_1
value: 15.643
- type: precision_at_10
value: 4.152
- type: precision_at_100
value: 0.681
- type: precision_at_1000
value: 0.095
- type: precision_at_20
value: 2.469
- type: precision_at_3
value: 9.173
- type: precision_at_5
value: 6.618
- type: recall_at_1
value: 15.643
- type: recall_at_10
value: 41.522999999999996
- type: recall_at_100
value: 68.059
- type: recall_at_1000
value: 95.332
- type: recall_at_20
value: 49.386
- type: recall_at_3
value: 27.517999999999997
- type: recall_at_5
value: 33.088
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 82.08446866485014
- type: cos_sim_ap
value: 93.63779278112501
- type: cos_sim_f1
value: 87.31307284129282
- type: cos_sim_precision
value: 84.8968105065666
- type: cos_sim_recall
value: 89.87090367428004
- type: dot_accuracy
value: 82.08446866485014
- type: dot_ap
value: 93.63779278112501
- type: dot_f1
value: 87.31307284129282
- type: dot_precision
value: 84.8968105065666
- type: dot_recall
value: 89.87090367428004
- type: euclidean_accuracy
value: 82.08446866485014
- type: euclidean_ap
value: 93.63779278112501
- type: euclidean_f1
value: 87.31307284129282
- type: euclidean_precision
value: 84.8968105065666
- type: euclidean_recall
value: 89.87090367428004
- type: manhattan_accuracy
value: 82.15258855585832
- type: manhattan_ap
value: 93.6291276133462
- type: manhattan_f1
value: 87.36740597878496
- type: manhattan_precision
value: 84.91096532333646
- type: manhattan_recall
value: 89.97020854021847
- type: max_accuracy
value: 82.15258855585832
- type: max_ap
value: 93.63779278112501
- type: max_f1
value: 87.36740597878496
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: google-research-datasets/paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 60.85
- type: cos_sim_ap
value: 60.37878110577144
- type: cos_sim_f1
value: 62.67978803936413
- type: cos_sim_precision
value: 47.61357101782634
- type: cos_sim_recall
value: 91.69435215946844
- type: dot_accuracy
value: 60.85
- type: dot_ap
value: 60.282876235548
- type: dot_f1
value: 62.67978803936413
- type: dot_precision
value: 47.61357101782634
- type: dot_recall
value: 91.69435215946844
- type: euclidean_accuracy
value: 60.85
- type: euclidean_ap
value: 60.37878110577144
- type: euclidean_f1
value: 62.67978803936413
- type: euclidean_precision
value: 47.61357101782634
- type: euclidean_recall
value: 91.69435215946844
- type: manhattan_accuracy
value: 61.0
- type: manhattan_ap
value: 60.32592616688752
- type: manhattan_f1
value: 62.70871985157699
- type: manhattan_precision
value: 47.154017857142854
- type: manhattan_recall
value: 93.57696566998892
- type: max_accuracy
value: 61.0
- type: max_ap
value: 60.37878110577144
- type: max_f1
value: 62.70871985157699
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 82.41331853949572
- type: cos_sim_spearman
value: 75.4815169821101
- type: euclidean_pearson
value: 79.81380205322172
- type: euclidean_spearman
value: 75.48076002322844
- type: manhattan_pearson
value: 79.86035354304643
- type: manhattan_spearman
value: 75.51770448845875
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cos_sim_pearson
value: 78.63723398201921
- type: cos_sim_spearman
value: 81.1413239539243
- type: euclidean_pearson
value: 79.65250044199216
- type: euclidean_spearman
value: 81.1413239539243
- type: manhattan_pearson
value: 79.4690995487022
- type: manhattan_spearman
value: 80.61411938663267
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: mteb/stsb_multi_mt
config: fr
split: test
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
metrics:
- type: cos_sim_pearson
value: 83.34057641357742
- type: cos_sim_spearman
value: 83.74771008807434
- type: euclidean_pearson
value: 83.22245971062527
- type: euclidean_spearman
value: 83.74894925253341
- type: manhattan_pearson
value: 83.06789257151542
- type: manhattan_spearman
value: 83.52796149940158
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 29.949370780736455
- type: cos_sim_spearman
value: 30.16472987232583
- type: dot_pearson
value: 29.94937008799093
- type: dot_spearman
value: 30.16472987232583
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: daf0863838cd9e3ba50544cdce3ac2b338a1b0ad
metrics:
- type: map
value: 83.82619047619046
- type: mrr
value: 83.82619047619046
- type: nAUC_map_diff1
value: 56.291594426865686
- type: nAUC_map_max
value: 9.006252496368798
- type: nAUC_mrr_diff1
value: 56.291594426865686
- type: nAUC_mrr_max
value: 9.006252496368798
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9
metrics:
- type: map_at_1
value: 62.0
- type: map_at_10
value: 74.369
- type: map_at_100
value: 74.626
- type: map_at_1000
value: 74.626
- type: map_at_20
value: 74.626
- type: map_at_3
value: 72.333
- type: map_at_5
value: 73.533
- type: mrr_at_1
value: 62.0
- type: mrr_at_10
value: 74.36944444444444
- type: mrr_at_100
value: 74.62553418803418
- type: mrr_at_1000
value: 74.62553418803418
- type: mrr_at_20
value: 74.62553418803418
- type: mrr_at_3
value: 72.33333333333333
- type: mrr_at_5
value: 73.53333333333333
- type: nauc_map_at_1000_diff1
value: 59.84829658893367
- type: nauc_map_at_1000_max
value: 25.92680320484747
- type: nauc_map_at_100_diff1
value: 59.84829658893367
- type: nauc_map_at_100_max
value: 25.92680320484747
- type: nauc_map_at_10_diff1
value: 60.14734510056334
- type: nauc_map_at_10_max
value: 26.4859845903765
- type: nauc_map_at_1_diff1
value: 61.15589330215675
- type: nauc_map_at_1_max
value: 25.502261329723385
- type: nauc_map_at_20_diff1
value: 59.84829658893367
- type: nauc_map_at_20_max
value: 25.92680320484747
- type: nauc_map_at_3_diff1
value: 60.664125602445864
- type: nauc_map_at_3_max
value: 24.598480701707597
- type: nauc_map_at_5_diff1
value: 59.96933672856163
- type: nauc_map_at_5_max
value: 26.87050847362874
- type: nauc_mrr_at_1000_diff1
value: 59.84829658893367
- type: nauc_mrr_at_1000_max
value: 25.92680320484747
- type: nauc_mrr_at_100_diff1
value: 59.84829658893367
- type: nauc_mrr_at_100_max
value: 25.92680320484747
- type: nauc_mrr_at_10_diff1
value: 60.14734510056334
- type: nauc_mrr_at_10_max
value: 26.4859845903765
- type: nauc_mrr_at_1_diff1
value: 61.15589330215675
- type: nauc_mrr_at_1_max
value: 25.502261329723385
- type: nauc_mrr_at_20_diff1
value: 59.84829658893367
- type: nauc_mrr_at_20_max
value: 25.92680320484747
- type: nauc_mrr_at_3_diff1
value: 60.664125602445864
- type: nauc_mrr_at_3_max
value: 24.598480701707597
- type: nauc_mrr_at_5_diff1
value: 59.96933672856163
- type: nauc_mrr_at_5_max
value: 26.87050847362874
- type: nauc_ndcg_at_1000_diff1
value: 60.04965149586935
- type: nauc_ndcg_at_1000_max
value: 26.34908378184259
- type: nauc_ndcg_at_100_diff1
value: 60.04965149586935
- type: nauc_ndcg_at_100_max
value: 26.34908378184259
- type: nauc_ndcg_at_10_diff1
value: 61.517946155950945
- type: nauc_ndcg_at_10_max
value: 29.005286712766882
- type: nauc_ndcg_at_1_diff1
value: 61.15589330215675
- type: nauc_ndcg_at_1_max
value: 25.502261329723385
- type: nauc_ndcg_at_20_diff1
value: 60.04965149586935
- type: nauc_ndcg_at_20_max
value: 26.34908378184259
- type: nauc_ndcg_at_3_diff1
value: 62.138907107716314
- type: nauc_ndcg_at_3_max
value: 24.66905359423539
- type: nauc_ndcg_at_5_diff1
value: 60.81453858060155
- type: nauc_ndcg_at_5_max
value: 29.652166992041785
- type: nauc_precision_at_1000_diff1
value: nan
- type: nauc_precision_at_1000_max
value: nan
- type: nauc_precision_at_100_diff1
value: nan
- type: nauc_precision_at_100_max
value: nan
- type: nauc_precision_at_10_diff1
value: 86.11111111111084
- type: nauc_precision_at_10_max
value: 72.80578898225937
- type: nauc_precision_at_1_diff1
value: 61.15589330215675
- type: nauc_precision_at_1_max
value: 25.502261329723385
- type: nauc_precision_at_20_diff1
value: 100.0
- type: nauc_precision_at_20_max
value: 100.0
- type: nauc_precision_at_3_diff1
value: 69.83729254799863
- type: nauc_precision_at_3_max
value: 25.2229092092417
- type: nauc_precision_at_5_diff1
value: 66.95144724556499
- type: nauc_precision_at_5_max
value: 49.640522875817005
- type: nauc_recall_at_1000_diff1
value: nan
- type: nauc_recall_at_1000_max
value: nan
- type: nauc_recall_at_100_diff1
value: nan
- type: nauc_recall_at_100_max
value: nan
- type: nauc_recall_at_10_diff1
value: 86.11111111111124
- type: nauc_recall_at_10_max
value: 72.8057889822595
- type: nauc_recall_at_1_diff1
value: 61.15589330215675
- type: nauc_recall_at_1_max
value: 25.502261329723385
- type: nauc_recall_at_20_diff1
value: nan
- type: nauc_recall_at_20_max
value: nan
- type: nauc_recall_at_3_diff1
value: 69.83729254799871
- type: nauc_recall_at_3_max
value: 25.22290920924175
- type: nauc_recall_at_5_diff1
value: 66.95144724556485
- type: nauc_recall_at_5_max
value: 49.64052287581686
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 79.679
- type: ndcg_at_100
value: 80.664
- type: ndcg_at_1000
value: 80.664
- type: ndcg_at_20
value: 80.664
- type: ndcg_at_3
value: 75.595
- type: ndcg_at_5
value: 77.704
- type: precision_at_1
value: 62.0
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 5.0
- type: precision_at_3
value: 28.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 62.0
- type: recall_at_10
value: 96.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 100.0
- type: recall_at_3
value: 85.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 37.597
- type: map_at_10
value: 57.888
- type: map_at_100
value: 59.277
- type: map_at_1000
value: 59.352000000000004
- type: map_at_20
value: 58.717
- type: map_at_3
value: 51.957
- type: map_at_5
value: 55.772999999999996
- type: mrr_at_1
value: 59.813084112149525
- type: mrr_at_10
value: 67.10031364570752
- type: mrr_at_100
value: 67.66545041236235
- type: mrr_at_1000
value: 67.6835494818551
- type: mrr_at_20
value: 67.49393600922676
- type: mrr_at_3
value: 65.17578994214506
- type: mrr_at_5
value: 66.18380062305292
- type: nauc_map_at_1000_diff1
value: 48.73204134706741
- type: nauc_map_at_1000_max
value: 48.167144249595445
- type: nauc_map_at_100_diff1
value: 48.678967361326784
- type: nauc_map_at_100_max
value: 48.150876586146055
- type: nauc_map_at_10_diff1
value: 48.49819455901337
- type: nauc_map_at_10_max
value: 47.419803565618324
- type: nauc_map_at_1_diff1
value: 58.76698174343172
- type: nauc_map_at_1_max
value: 32.18211527608151
- type: nauc_map_at_20_diff1
value: 48.40197639399712
- type: nauc_map_at_20_max
value: 47.81717012073508
- type: nauc_map_at_3_diff1
value: 50.73932443832007
- type: nauc_map_at_3_max
value: 42.24066337784361
- type: nauc_map_at_5_diff1
value: 49.46975560894318
- type: nauc_map_at_5_max
value: 45.80443987601635
- type: nauc_mrr_at_1000_diff1
value: 57.835936362591525
- type: nauc_mrr_at_1000_max
value: 58.573550643855896
- type: nauc_mrr_at_100_diff1
value: 57.82678446452145
- type: nauc_mrr_at_100_max
value: 58.5764861850602
- type: nauc_mrr_at_10_diff1
value: 57.76879895710224
- type: nauc_mrr_at_10_max
value: 58.64968053975452
- type: nauc_mrr_at_1_diff1
value: 59.856533986276574
- type: nauc_mrr_at_1_max
value: 58.730867813105725
- type: nauc_mrr_at_20_diff1
value: 57.795349563018746
- type: nauc_mrr_at_20_max
value: 58.516079879105256
- type: nauc_mrr_at_3_diff1
value: 57.76489841294852
- type: nauc_mrr_at_3_max
value: 58.58658769928032
- type: nauc_mrr_at_5_diff1
value: 57.694777424382515
- type: nauc_mrr_at_5_max
value: 58.56316065282314
- type: nauc_ndcg_at_1000_diff1
value: 50.69160796479768
- type: nauc_ndcg_at_1000_max
value: 52.199577971960785
- type: nauc_ndcg_at_100_diff1
value: 49.86099689038677
- type: nauc_ndcg_at_100_max
value: 52.151790872414125
- type: nauc_ndcg_at_10_diff1
value: 48.72866627696869
- type: nauc_ndcg_at_10_max
value: 50.06953156417251
- type: nauc_ndcg_at_1_diff1
value: 59.856533986276574
- type: nauc_ndcg_at_1_max
value: 58.730867813105725
- type: nauc_ndcg_at_20_diff1
value: 48.588995750275565
- type: nauc_ndcg_at_20_max
value: 50.49019072586609
- type: nauc_ndcg_at_3_diff1
value: 50.45588351336764
- type: nauc_ndcg_at_3_max
value: 48.984274202014916
- type: nauc_ndcg_at_5_diff1
value: 50.006960016438505
- type: nauc_ndcg_at_5_max
value: 48.303806275166735
- type: nauc_precision_at_1000_diff1
value: -16.164680055782775
- type: nauc_precision_at_1000_max
value: 16.124503094722208
- type: nauc_precision_at_100_diff1
value: -13.36404724754223
- type: nauc_precision_at_100_max
value: 23.29679787821716
- type: nauc_precision_at_10_diff1
value: -3.7402851742774788
- type: nauc_precision_at_10_max
value: 32.14138887961609
- type: nauc_precision_at_1_diff1
value: 59.856533986276574
- type: nauc_precision_at_1_max
value: 58.730867813105725
- type: nauc_precision_at_20_diff1
value: -8.971829372748005
- type: nauc_precision_at_20_max
value: 27.87664599166348
- type: nauc_precision_at_3_diff1
value: 11.427083566719917
- type: nauc_precision_at_3_max
value: 39.94075723264166
- type: nauc_precision_at_5_diff1
value: 3.5428167965724233
- type: nauc_precision_at_5_max
value: 35.65970898316591
- type: nauc_recall_at_1000_diff1
value: 42.34179228230109
- type: nauc_recall_at_1000_max
value: 69.84746848920715
- type: nauc_recall_at_100_diff1
value: 30.47165287340051
- type: nauc_recall_at_100_max
value: 47.90447689942959
- type: nauc_recall_at_10_diff1
value: 36.65046615294308
- type: nauc_recall_at_10_max
value: 41.81737771961874
- type: nauc_recall_at_1_diff1
value: 58.76698174343172
- type: nauc_recall_at_1_max
value: 32.18211527608151
- type: nauc_recall_at_20_diff1
value: 33.33461286517975
- type: nauc_recall_at_20_max
value: 40.033064434150155
- type: nauc_recall_at_3_diff1
value: 45.94521561906703
- type: nauc_recall_at_3_max
value: 37.46948921295656
- type: nauc_recall_at_5_diff1
value: 42.66425368847329
- type: nauc_recall_at_5_max
value: 40.64657773118315
- type: ndcg_at_1
value: 59.813
- type: ndcg_at_10
value: 64.208
- type: ndcg_at_100
value: 69.002
- type: ndcg_at_1000
value: 70.23700000000001
- type: ndcg_at_20
value: 66.29899999999999
- type: ndcg_at_3
value: 59.099999999999994
- type: ndcg_at_5
value: 60.763999999999996
- type: precision_at_1
value: 59.813
- type: precision_at_10
value: 14.766000000000002
- type: precision_at_100
value: 1.8870000000000002
- type: precision_at_1000
value: 0.20600000000000002
- type: precision_at_20
value: 8.117
- type: precision_at_3
value: 35.781
- type: precision_at_5
value: 25.688
- type: recall_at_1
value: 37.597
- type: recall_at_10
value: 72.919
- type: recall_at_100
value: 91.526
- type: recall_at_1000
value: 99.421
- type: recall_at_20
value: 79.64
- type: recall_at_3
value: 56.836
- type: recall_at_5
value: 64.364
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.41791044776119
- type: ap
value: 41.50313649654625
- type: ap_weighted
value: 41.50313649654625
- type: f1
value: 71.69242302886543
- type: f1_weighted
value: 79.4051024757404
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 89.51284999999999
- type: ap
value: 85.78657792017795
- type: ap_weighted
value: 85.78657792017795
- type: f1
value: 89.48680521118494
- type: f1_weighted
value: 89.48680521118494
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.096000000000004
- type: f1
value: 44.75176024696358
- type: f1_weighted
value: 44.75176024696358
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 27.738000000000003
- type: map_at_10
value: 42.992999999999995
- type: map_at_100
value: 44.025999999999996
- type: map_at_1000
value: 44.037
- type: map_at_20
value: 43.838
- type: map_at_3
value: 37.707
- type: map_at_5
value: 40.851
- type: mrr_at_1
value: 28.02275960170697
- type: mrr_at_10
value: 43.129445234708385
- type: mrr_at_100
value: 44.14842568065762
- type: mrr_at_1000
value: 44.15944224906741
- type: mrr_at_20
value: 43.9604803125253
- type: mrr_at_3
value: 37.79042200094829
- type: mrr_at_5
value: 40.937648174490285
- type: nauc_map_at_1000_diff1
value: 7.819134326851951
- type: nauc_map_at_1000_max
value: -5.363208805959698
- type: nauc_map_at_100_diff1
value: 7.827581788081036
- type: nauc_map_at_100_max
value: -5.343758430445495
- type: nauc_map_at_10_diff1
value: 7.788779237579437
- type: nauc_map_at_10_max
value: -5.212502829074271
- type: nauc_map_at_1_diff1
value: 8.676582434483777
- type: nauc_map_at_1_max
value: -9.028400174777206
- type: nauc_map_at_20_diff1
value: 7.843647300531639
- type: nauc_map_at_20_max
value: -5.171628909368918
- type: nauc_map_at_3_diff1
value: 7.580280985257144
- type: nauc_map_at_3_max
value: -6.508188744493309
- type: nauc_map_at_5_diff1
value: 7.461255107228164
- type: nauc_map_at_5_max
value: -5.150343094282473
- type: nauc_mrr_at_1000_diff1
value: 6.911455227219296
- type: nauc_mrr_at_1000_max
value: -5.866891482528428
- type: nauc_mrr_at_100_diff1
value: 6.920215930616841
- type: nauc_mrr_at_100_max
value: -5.847302915503507
- type: nauc_mrr_at_10_diff1
value: 6.89637694159382
- type: nauc_mrr_at_10_max
value: -5.7267515522713985
- type: nauc_mrr_at_1_diff1
value: 7.784148492423344
- type: nauc_mrr_at_1_max
value: -9.261177003486702
- type: nauc_mrr_at_20_diff1
value: 6.941892867312857
- type: nauc_mrr_at_20_max
value: -5.672366841914248
- type: nauc_mrr_at_3_diff1
value: 6.655156176778528
- type: nauc_mrr_at_3_max
value: -7.08130312539927
- type: nauc_mrr_at_5_diff1
value: 6.474672753475723
- type: nauc_mrr_at_5_max
value: -5.7771100192539455
- type: nauc_ndcg_at_1000_diff1
value: 7.82257902156683
- type: nauc_ndcg_at_1000_max
value: -4.284182821313092
- type: nauc_ndcg_at_100_diff1
value: 7.982663390398444
- type: nauc_ndcg_at_100_max
value: -3.829916815248607
- type: nauc_ndcg_at_10_diff1
value: 7.940691283173258
- type: nauc_ndcg_at_10_max
value: -2.759653230358356
- type: nauc_ndcg_at_1_diff1
value: 8.676582434483777
- type: nauc_ndcg_at_1_max
value: -9.028400174777206
- type: nauc_ndcg_at_20_diff1
value: 8.216154898914834
- type: nauc_ndcg_at_20_max
value: -2.5281250069887644
- type: nauc_ndcg_at_3_diff1
value: 7.397192517335338
- type: nauc_ndcg_at_3_max
value: -5.506567290248059
- type: nauc_ndcg_at_5_diff1
value: 7.224597118942196
- type: nauc_ndcg_at_5_max
value: -2.8306642442626635
- type: nauc_precision_at_1000_diff1
value: -28.424446830488918
- type: nauc_precision_at_1000_max
value: 15.2996011292648
- type: nauc_precision_at_100_diff1
value: 13.817321356393311
- type: nauc_precision_at_100_max
value: 35.62226207701842
- type: nauc_precision_at_10_diff1
value: 9.129870306379654
- type: nauc_precision_at_10_max
value: 10.371721765203494
- type: nauc_precision_at_1_diff1
value: 8.676582434483777
- type: nauc_precision_at_1_max
value: -9.028400174777206
- type: nauc_precision_at_20_diff1
value: 14.378273723959786
- type: nauc_precision_at_20_max
value: 29.411949476113275
- type: nauc_precision_at_3_diff1
value: 6.924474252093112
- type: nauc_precision_at_3_max
value: -2.589098226592611
- type: nauc_precision_at_5_diff1
value: 6.523266339112172
- type: nauc_precision_at_5_max
value: 5.387183619653168
- type: nauc_recall_at_1000_diff1
value: -28.424446830487355
- type: nauc_recall_at_1000_max
value: 15.299601129265062
- type: nauc_recall_at_100_diff1
value: 13.817321356392966
- type: nauc_recall_at_100_max
value: 35.62226207701895
- type: nauc_recall_at_10_diff1
value: 9.129870306379667
- type: nauc_recall_at_10_max
value: 10.371721765203487
- type: nauc_recall_at_1_diff1
value: 8.676582434483777
- type: nauc_recall_at_1_max
value: -9.028400174777206
- type: nauc_recall_at_20_diff1
value: 14.378273723959634
- type: nauc_recall_at_20_max
value: 29.411949476113342
- type: nauc_recall_at_3_diff1
value: 6.924474252093177
- type: nauc_recall_at_3_max
value: -2.589098226592573
- type: nauc_recall_at_5_diff1
value: 6.5232663391122045
- type: nauc_recall_at_5_max
value: 5.38718361965314
- type: ndcg_at_1
value: 27.738000000000003
- type: ndcg_at_10
value: 51.867
- type: ndcg_at_100
value: 56.010000000000005
- type: ndcg_at_1000
value: 56.25599999999999
- type: ndcg_at_20
value: 54.872
- type: ndcg_at_3
value: 41.041
- type: ndcg_at_5
value: 46.7
- type: precision_at_1
value: 27.738000000000003
- type: precision_at_10
value: 8.036999999999999
- type: precision_at_100
value: 0.9780000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.605
- type: precision_at_3
value: 16.904
- type: precision_at_5
value: 12.888
- type: recall_at_1
value: 27.738000000000003
- type: recall_at_10
value: 80.36999999999999
- type: recall_at_100
value: 97.795
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 92.105
- type: recall_at_3
value: 50.711
- type: recall_at_5
value: 64.43799999999999
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.25989960324142
- type: v_measures
value:
- 0.4355201593114565
- 0.43400969136008477
- 0.4621501390953121
- 0.4368716556310582
- 0.46142659337392417
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.00514206925075
- type: v_measures
value:
- 0.38194577110919753
- 0.3641834175557571
- 0.3594175019099288
- 0.35392802860850453
- 0.36877397641003723
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.2761176425466
- type: mrr
value: 73.2264872708086
- type: nAUC_map_diff1
value: 7.0068523566823835
- type: nAUC_map_max
value: 21.51466581178718
- type: nAUC_mrr_diff1
value: 16.619643437951563
- type: nAUC_mrr_max
value: 32.302108972143714
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.84186171115664
- type: cos_sim_spearman
value: 87.13150342270266
- type: euclidean_pearson
value: 86.93703588206957
- type: euclidean_spearman
value: 87.13150342270266
- type: manhattan_pearson
value: 86.78921932668315
- type: manhattan_spearman
value: 86.73631369514506
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.46103896103897
- type: f1
value: 77.72189862815705
- type: f1_weighted
value: 77.72189862815705
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.380764168313746
- type: v_measures
value:
- 0.37382307264671194
- 0.36301531251106245
- 0.3793738872327412
- 0.35770158136125185
- 0.3555364559997305
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.538017334550577
- type: v_measures
value:
- 0.30690945509666684
- 0.31808990661791575
- 0.31514802723414864
- 0.3159451399149567
- 0.3171249261521223
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 32.464
- type: map_at_10
value: 42.91
- type: map_at_100
value: 44.253
- type: map_at_1000
value: 44.378
- type: map_at_20
value: 43.608000000000004
- type: map_at_3
value: 39.21
- type: map_at_5
value: 41.243
- type: mrr_at_1
value: 40.05722460658083
- type: mrr_at_10
value: 49.06050593818831
- type: mrr_at_100
value: 49.75550694943466
- type: mrr_at_1000
value: 49.80211926259268
- type: mrr_at_20
value: 49.466746239607275
- type: mrr_at_3
value: 46.256556986170715
- type: mrr_at_5
value: 48.05197901764422
- type: nauc_map_at_1000_diff1
value: 54.452516877596366
- type: nauc_map_at_1000_max
value: 42.37830524228681
- type: nauc_map_at_100_diff1
value: 54.455120677207944
- type: nauc_map_at_100_max
value: 42.35365218254892
- type: nauc_map_at_10_diff1
value: 54.411476414486735
- type: nauc_map_at_10_max
value: 41.799626583253286
- type: nauc_map_at_1_diff1
value: 56.844131977626574
- type: nauc_map_at_1_max
value: 39.50909847050082
- type: nauc_map_at_20_diff1
value: 54.474114038349
- type: nauc_map_at_20_max
value: 41.95011324047797
- type: nauc_map_at_3_diff1
value: 54.94801787552844
- type: nauc_map_at_3_max
value: 40.93589777136806
- type: nauc_map_at_5_diff1
value: 54.51000824622664
- type: nauc_map_at_5_max
value: 41.942988719761516
- type: nauc_mrr_at_1000_diff1
value: 54.73077714753202
- type: nauc_mrr_at_1000_max
value: 44.79399425080995
- type: nauc_mrr_at_100_diff1
value: 54.72534798039509
- type: nauc_mrr_at_100_max
value: 44.784407506240214
- type: nauc_mrr_at_10_diff1
value: 54.60375865684595
- type: nauc_mrr_at_10_max
value: 44.557364932034154
- type: nauc_mrr_at_1_diff1
value: 57.776230077156235
- type: nauc_mrr_at_1_max
value: 46.471474228551344
- type: nauc_mrr_at_20_diff1
value: 54.70304613396688
- type: nauc_mrr_at_20_max
value: 44.66734285522038
- type: nauc_mrr_at_3_diff1
value: 55.3933778729286
- type: nauc_mrr_at_3_max
value: 44.696022513397615
- type: nauc_mrr_at_5_diff1
value: 54.71004626344184
- type: nauc_mrr_at_5_max
value: 44.965744262291004
- type: nauc_ndcg_at_1000_diff1
value: 53.26255776636424
- type: nauc_ndcg_at_1000_max
value: 43.58732162869603
- type: nauc_ndcg_at_100_diff1
value: 52.99574164185918
- type: nauc_ndcg_at_100_max
value: 43.35845196216733
- type: nauc_ndcg_at_10_diff1
value: 52.660009377886766
- type: nauc_ndcg_at_10_max
value: 41.93002636395951
- type: nauc_ndcg_at_1_diff1
value: 57.776230077156235
- type: nauc_ndcg_at_1_max
value: 46.471474228551344
- type: nauc_ndcg_at_20_diff1
value: 52.925283797059066
- type: nauc_ndcg_at_20_max
value: 41.96413582256493
- type: nauc_ndcg_at_3_diff1
value: 53.86123945887276
- type: nauc_ndcg_at_3_max
value: 42.5192092394243
- type: nauc_ndcg_at_5_diff1
value: 53.02739573145395
- type: nauc_ndcg_at_5_max
value: 42.86255544029417
- type: nauc_precision_at_1000_diff1
value: -14.01444880844629
- type: nauc_precision_at_1000_max
value: -3.2426344768649065
- type: nauc_precision_at_100_diff1
value: -2.665892254195872
- type: nauc_precision_at_100_max
value: 11.174117765610346
- type: nauc_precision_at_10_diff1
value: 17.760500367118006
- type: nauc_precision_at_10_max
value: 25.819513742057314
- type: nauc_precision_at_1_diff1
value: 57.776230077156235
- type: nauc_precision_at_1_max
value: 46.471474228551344
- type: nauc_precision_at_20_diff1
value: 10.720961197841934
- type: nauc_precision_at_20_max
value: 20.104016753843656
- type: nauc_precision_at_3_diff1
value: 38.00682945145973
- type: nauc_precision_at_3_max
value: 39.91552880079303
- type: nauc_precision_at_5_diff1
value: 29.195186929472932
- type: nauc_precision_at_5_max
value: 36.060771452887344
- type: nauc_recall_at_1000_diff1
value: 34.24272104794043
- type: nauc_recall_at_1000_max
value: 56.510230841605825
- type: nauc_recall_at_100_diff1
value: 39.42477153393114
- type: nauc_recall_at_100_max
value: 41.44622822460404
- type: nauc_recall_at_10_diff1
value: 42.98765339932259
- type: nauc_recall_at_10_max
value: 34.34817326152696
- type: nauc_recall_at_1_diff1
value: 56.844131977626574
- type: nauc_recall_at_1_max
value: 39.50909847050082
- type: nauc_recall_at_20_diff1
value: 42.49763875384549
- type: nauc_recall_at_20_max
value: 34.211320392734436
- type: nauc_recall_at_3_diff1
value: 49.54385449610674
- type: nauc_recall_at_3_max
value: 37.050307605313755
- type: nauc_recall_at_5_diff1
value: 45.79369932076432
- type: nauc_recall_at_5_max
value: 38.06187420388636
- type: ndcg_at_1
value: 40.056999999999995
- type: ndcg_at_10
value: 49.228
- type: ndcg_at_100
value: 54.162
- type: ndcg_at_1000
value: 56.205000000000005
- type: ndcg_at_20
value: 51.034
- type: ndcg_at_3
value: 43.94
- type: ndcg_at_5
value: 46.504
- type: precision_at_1
value: 40.056999999999995
- type: precision_at_10
value: 9.528
- type: precision_at_100
value: 1.472
- type: precision_at_1000
value: 0.192
- type: precision_at_20
value: 5.494000000000001
- type: precision_at_3
value: 20.887
- type: precision_at_5
value: 15.193000000000001
- type: recall_at_1
value: 32.464
- type: recall_at_10
value: 60.831
- type: recall_at_100
value: 81.85900000000001
- type: recall_at_1000
value: 95.15
- type: recall_at_20
value: 67.657
- type: recall_at_3
value: 45.489000000000004
- type: recall_at_5
value: 52.839000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 27.204
- type: map_at_10
value: 36.565999999999995
- type: map_at_100
value: 37.706
- type: map_at_1000
value: 37.827
- type: map_at_20
value: 37.124
- type: map_at_3
value: 33.629999999999995
- type: map_at_5
value: 35.345
- type: mrr_at_1
value: 33.88535031847134
- type: mrr_at_10
value: 42.00993327267216
- type: mrr_at_100
value: 42.69324389051348
- type: mrr_at_1000
value: 42.742001707727674
- type: mrr_at_20
value: 42.383657095896574
- type: mrr_at_3
value: 39.61783439490446
- type: mrr_at_5
value: 41.02547770700639
- type: nauc_map_at_1000_diff1
value: 51.2434173008466
- type: nauc_map_at_1000_max
value: 42.89551471851398
- type: nauc_map_at_100_diff1
value: 51.25803151639599
- type: nauc_map_at_100_max
value: 42.84835058448656
- type: nauc_map_at_10_diff1
value: 51.392478648037475
- type: nauc_map_at_10_max
value: 42.520436932382275
- type: nauc_map_at_1_diff1
value: 58.311410816649435
- type: nauc_map_at_1_max
value: 38.78632287563295
- type: nauc_map_at_20_diff1
value: 51.2961213976287
- type: nauc_map_at_20_max
value: 42.5853171119173
- type: nauc_map_at_3_diff1
value: 52.988281664633696
- type: nauc_map_at_3_max
value: 41.255285265369714
- type: nauc_map_at_5_diff1
value: 51.90811230897579
- type: nauc_map_at_5_max
value: 42.25025338907201
- type: nauc_mrr_at_1000_diff1
value: 49.968831428382956
- type: nauc_mrr_at_1000_max
value: 44.835796996668066
- type: nauc_mrr_at_100_diff1
value: 49.96427305359113
- type: nauc_mrr_at_100_max
value: 44.83231841203824
- type: nauc_mrr_at_10_diff1
value: 49.94029375694121
- type: nauc_mrr_at_10_max
value: 44.88750685573963
- type: nauc_mrr_at_1_diff1
value: 56.40160291266728
- type: nauc_mrr_at_1_max
value: 45.557456279594454
- type: nauc_mrr_at_20_diff1
value: 49.948501688516814
- type: nauc_mrr_at_20_max
value: 44.86680460708911
- type: nauc_mrr_at_3_diff1
value: 50.70198183915884
- type: nauc_mrr_at_3_max
value: 44.69764399444459
- type: nauc_mrr_at_5_diff1
value: 50.171095819167164
- type: nauc_mrr_at_5_max
value: 44.81579964530846
- type: nauc_ndcg_at_1000_diff1
value: 48.15540595763338
- type: nauc_ndcg_at_1000_max
value: 43.71537508558133
- type: nauc_ndcg_at_100_diff1
value: 48.272332711054126
- type: nauc_ndcg_at_100_max
value: 43.6322161272428
- type: nauc_ndcg_at_10_diff1
value: 48.18746476758319
- type: nauc_ndcg_at_10_max
value: 43.36809828847912
- type: nauc_ndcg_at_1_diff1
value: 56.40160291266728
- type: nauc_ndcg_at_1_max
value: 45.557456279594454
- type: nauc_ndcg_at_20_diff1
value: 48.17338076733571
- type: nauc_ndcg_at_20_max
value: 43.20321056220099
- type: nauc_ndcg_at_3_diff1
value: 50.105385726603714
- type: nauc_ndcg_at_3_max
value: 43.120332022171915
- type: nauc_ndcg_at_5_diff1
value: 48.8630439359171
- type: nauc_ndcg_at_5_max
value: 43.370066685949446
- type: nauc_precision_at_1000_diff1
value: -11.350357006977681
- type: nauc_precision_at_1000_max
value: 12.582096763421525
- type: nauc_precision_at_100_diff1
value: -2.958036676255927
- type: nauc_precision_at_100_max
value: 22.696487876398155
- type: nauc_precision_at_10_diff1
value: 16.189353401009978
- type: nauc_precision_at_10_max
value: 38.13442143062204
- type: nauc_precision_at_1_diff1
value: 56.40160291266728
- type: nauc_precision_at_1_max
value: 45.557456279594454
- type: nauc_precision_at_20_diff1
value: 8.797894276494759
- type: nauc_precision_at_20_max
value: 32.53273333292465
- type: nauc_precision_at_3_diff1
value: 33.166335907173284
- type: nauc_precision_at_3_max
value: 43.54274244894697
- type: nauc_precision_at_5_diff1
value: 24.890638457308835
- type: nauc_precision_at_5_max
value: 41.85350446231859
- type: nauc_recall_at_1000_diff1
value: 30.966590369789042
- type: nauc_recall_at_1000_max
value: 40.05217189462629
- type: nauc_recall_at_100_diff1
value: 36.50185764515
- type: nauc_recall_at_100_max
value: 40.40225246071926
- type: nauc_recall_at_10_diff1
value: 39.21222014929766
- type: nauc_recall_at_10_max
value: 39.510881532659106
- type: nauc_recall_at_1_diff1
value: 58.311410816649435
- type: nauc_recall_at_1_max
value: 38.78632287563295
- type: nauc_recall_at_20_diff1
value: 37.90625645817586
- type: nauc_recall_at_20_max
value: 38.83095083812718
- type: nauc_recall_at_3_diff1
value: 45.67833862057349
- type: nauc_recall_at_3_max
value: 38.301387193837385
- type: nauc_recall_at_5_diff1
value: 41.97377098035378
- type: nauc_recall_at_5_max
value: 39.44428303279494
- type: ndcg_at_1
value: 33.885
- type: ndcg_at_10
value: 41.959999999999994
- type: ndcg_at_100
value: 46.444
- type: ndcg_at_1000
value: 48.542
- type: ndcg_at_20
value: 43.516
- type: ndcg_at_3
value: 37.662
- type: ndcg_at_5
value: 39.694
- type: precision_at_1
value: 33.885
- type: precision_at_10
value: 7.911
- type: precision_at_100
value: 1.306
- type: precision_at_1000
value: 0.17600000000000002
- type: precision_at_20
value: 4.634
- type: precision_at_3
value: 18.195
- type: precision_at_5
value: 13.032
- type: recall_at_1
value: 27.204
- type: recall_at_10
value: 51.964999999999996
- type: recall_at_100
value: 70.977
- type: recall_at_1000
value: 84.48
- type: recall_at_20
value: 57.568
- type: recall_at_3
value: 39.292
- type: recall_at_5
value: 45.051
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 40.092
- type: map_at_10
value: 51.94499999999999
- type: map_at_100
value: 52.952999999999996
- type: map_at_1000
value: 53.010999999999996
- type: map_at_20
value: 52.544000000000004
- type: map_at_3
value: 48.798
- type: map_at_5
value: 50.575
- type: mrr_at_1
value: 45.57993730407524
- type: mrr_at_10
value: 55.19226750261236
- type: mrr_at_100
value: 55.86648188807898
- type: mrr_at_1000
value: 55.893179256732914
- type: mrr_at_20
value: 55.605862164423684
- type: mrr_at_3
value: 52.737722048066914
- type: mrr_at_5
value: 54.142110762800485
- type: nauc_map_at_1000_diff1
value: 56.4825391687947
- type: nauc_map_at_1000_max
value: 42.69718253696078
- type: nauc_map_at_100_diff1
value: 56.472594143313714
- type: nauc_map_at_100_max
value: 42.68390703868357
- type: nauc_map_at_10_diff1
value: 56.60851591326905
- type: nauc_map_at_10_max
value: 42.545318648853254
- type: nauc_map_at_1_diff1
value: 58.78580013920528
- type: nauc_map_at_1_max
value: 37.250129371959034
- type: nauc_map_at_20_diff1
value: 56.522847775596496
- type: nauc_map_at_20_max
value: 42.67436226258157
- type: nauc_map_at_3_diff1
value: 56.41058843854863
- type: nauc_map_at_3_max
value: 41.1447087205128
- type: nauc_map_at_5_diff1
value: 56.615742462460375
- type: nauc_map_at_5_max
value: 42.18432838422091
- type: nauc_mrr_at_1000_diff1
value: 57.02890931424712
- type: nauc_mrr_at_1000_max
value: 44.542419456217395
- type: nauc_mrr_at_100_diff1
value: 57.015358825202966
- type: nauc_mrr_at_100_max
value: 44.53905706111591
- type: nauc_mrr_at_10_diff1
value: 57.04585925976531
- type: nauc_mrr_at_10_max
value: 44.61348989967417
- type: nauc_mrr_at_1_diff1
value: 59.97805122993276
- type: nauc_mrr_at_1_max
value: 43.74889272537995
- type: nauc_mrr_at_20_diff1
value: 57.006157936095484
- type: nauc_mrr_at_20_max
value: 44.59603635627128
- type: nauc_mrr_at_3_diff1
value: 56.917680357532504
- type: nauc_mrr_at_3_max
value: 44.16899447567816
- type: nauc_mrr_at_5_diff1
value: 56.92085593489732
- type: nauc_mrr_at_5_max
value: 44.6067245655727
- type: nauc_ndcg_at_1000_diff1
value: 55.99301213747579
- type: nauc_ndcg_at_1000_max
value: 44.13571318751295
- type: nauc_ndcg_at_100_diff1
value: 55.69267672766463
- type: nauc_ndcg_at_100_max
value: 44.01363451781653
- type: nauc_ndcg_at_10_diff1
value: 56.08101977835497
- type: nauc_ndcg_at_10_max
value: 44.04565223998733
- type: nauc_ndcg_at_1_diff1
value: 59.97805122993276
- type: nauc_ndcg_at_1_max
value: 43.74889272537995
- type: nauc_ndcg_at_20_diff1
value: 55.9395678717101
- type: nauc_ndcg_at_20_max
value: 44.276016640316584
- type: nauc_ndcg_at_3_diff1
value: 55.61181442897005
- type: nauc_ndcg_at_3_max
value: 42.505752873203875
- type: nauc_ndcg_at_5_diff1
value: 55.931533774058074
- type: nauc_ndcg_at_5_max
value: 43.62473544458933
- type: nauc_precision_at_1000_diff1
value: -13.91854408201959
- type: nauc_precision_at_1000_max
value: 8.959796412269117
- type: nauc_precision_at_100_diff1
value: -6.051946111858287
- type: nauc_precision_at_100_max
value: 15.919914740220339
- type: nauc_precision_at_10_diff1
value: 19.584103811469795
- type: nauc_precision_at_10_max
value: 32.679327751531886
- type: nauc_precision_at_1_diff1
value: 59.97805122993276
- type: nauc_precision_at_1_max
value: 43.74889272537995
- type: nauc_precision_at_20_diff1
value: 10.708778552460565
- type: nauc_precision_at_20_max
value: 27.76302369902412
- type: nauc_precision_at_3_diff1
value: 37.72812007268646
- type: nauc_precision_at_3_max
value: 40.9098215392736
- type: nauc_precision_at_5_diff1
value: 29.8927353855664
- type: nauc_precision_at_5_max
value: 38.77105720875548
- type: nauc_recall_at_1000_diff1
value: 49.43352356696205
- type: nauc_recall_at_1000_max
value: 54.87661194579149
- type: nauc_recall_at_100_diff1
value: 46.097079017519164
- type: nauc_recall_at_100_max
value: 44.71576750940437
- type: nauc_recall_at_10_diff1
value: 52.66222731151876
- type: nauc_recall_at_10_max
value: 44.98829150528362
- type: nauc_recall_at_1_diff1
value: 58.78580013920528
- type: nauc_recall_at_1_max
value: 37.250129371959034
- type: nauc_recall_at_20_diff1
value: 51.55229424448056
- type: nauc_recall_at_20_max
value: 46.82438704609937
- type: nauc_recall_at_3_diff1
value: 52.87567308584833
- type: nauc_recall_at_3_max
value: 41.13816110816809
- type: nauc_recall_at_5_diff1
value: 52.83904035972527
- type: nauc_recall_at_5_max
value: 43.519187012248025
- type: ndcg_at_1
value: 45.58
- type: ndcg_at_10
value: 57.534
- type: ndcg_at_100
value: 61.6
- type: ndcg_at_1000
value: 62.742
- type: ndcg_at_20
value: 59.270999999999994
- type: ndcg_at_3
value: 52.193
- type: ndcg_at_5
value: 54.725
- type: precision_at_1
value: 45.58
- type: precision_at_10
value: 9.129
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.136
- type: precision_at_20
value: 5.082
- type: precision_at_3
value: 23.051
- type: precision_at_5
value: 15.748999999999999
- type: recall_at_1
value: 40.092
- type: recall_at_10
value: 70.889
- type: recall_at_100
value: 88.45100000000001
- type: recall_at_1000
value: 96.447
- type: recall_at_20
value: 77.239
- type: recall_at_3
value: 56.489999999999995
- type: recall_at_5
value: 62.647
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 20.277
- type: map_at_10
value: 27.969
- type: map_at_100
value: 28.933999999999997
- type: map_at_1000
value: 29.038000000000004
- type: map_at_20
value: 28.486
- type: map_at_3
value: 25.418000000000003
- type: map_at_5
value: 26.709
- type: mrr_at_1
value: 21.694915254237287
- type: mrr_at_10
value: 29.60447493498341
- type: mrr_at_100
value: 30.512620484407922
- type: mrr_at_1000
value: 30.591656063000244
- type: mrr_at_20
value: 30.121039448386362
- type: mrr_at_3
value: 27.19397363465162
- type: mrr_at_5
value: 28.448210922787204
- type: nauc_map_at_1000_diff1
value: 45.363822324742415
- type: nauc_map_at_1000_max
value: 29.509295922731372
- type: nauc_map_at_100_diff1
value: 45.331606636295966
- type: nauc_map_at_100_max
value: 29.505487961576044
- type: nauc_map_at_10_diff1
value: 45.38885724901821
- type: nauc_map_at_10_max
value: 28.946567703884206
- type: nauc_map_at_1_diff1
value: 54.7423053253451
- type: nauc_map_at_1_max
value: 30.202680999900373
- type: nauc_map_at_20_diff1
value: 45.43050910968979
- type: nauc_map_at_20_max
value: 29.35034157464228
- type: nauc_map_at_3_diff1
value: 47.51058437623272
- type: nauc_map_at_3_max
value: 29.740666890218183
- type: nauc_map_at_5_diff1
value: 45.90224481327457
- type: nauc_map_at_5_max
value: 28.80931616822233
- type: nauc_mrr_at_1000_diff1
value: 45.38410126007919
- type: nauc_mrr_at_1000_max
value: 31.462744626905998
- type: nauc_mrr_at_100_diff1
value: 45.36092538448443
- type: nauc_mrr_at_100_max
value: 31.48088167063395
- type: nauc_mrr_at_10_diff1
value: 45.38915659941786
- type: nauc_mrr_at_10_max
value: 31.07285932737546
- type: nauc_mrr_at_1_diff1
value: 54.930775252405226
- type: nauc_mrr_at_1_max
value: 33.5116859460449
- type: nauc_mrr_at_20_diff1
value: 45.418208062544515
- type: nauc_mrr_at_20_max
value: 31.423541265829346
- type: nauc_mrr_at_3_diff1
value: 47.521366857933685
- type: nauc_mrr_at_3_max
value: 32.27219903173878
- type: nauc_mrr_at_5_diff1
value: 45.77958904462302
- type: nauc_mrr_at_5_max
value: 31.03500930751467
- type: nauc_ndcg_at_1000_diff1
value: 41.635595953748044
- type: nauc_ndcg_at_1000_max
value: 29.99928035763284
- type: nauc_ndcg_at_100_diff1
value: 40.88371796938874
- type: nauc_ndcg_at_100_max
value: 30.281800111940075
- type: nauc_ndcg_at_10_diff1
value: 41.21530633973889
- type: nauc_ndcg_at_10_max
value: 28.03405573161477
- type: nauc_ndcg_at_1_diff1
value: 54.930775252405226
- type: nauc_ndcg_at_1_max
value: 33.5116859460449
- type: nauc_ndcg_at_20_diff1
value: 41.354735186387494
- type: nauc_ndcg_at_20_max
value: 29.290000578859498
- type: nauc_ndcg_at_3_diff1
value: 45.31600437511932
- type: nauc_ndcg_at_3_max
value: 30.107259401213447
- type: nauc_ndcg_at_5_diff1
value: 42.46548676756585
- type: nauc_ndcg_at_5_max
value: 28.066140473016777
- type: nauc_precision_at_1000_diff1
value: 3.3001494044534
- type: nauc_precision_at_1000_max
value: 23.86920410371473
- type: nauc_precision_at_100_diff1
value: 14.600936720175225
- type: nauc_precision_at_100_max
value: 34.79005291009276
- type: nauc_precision_at_10_diff1
value: 25.191040884814313
- type: nauc_precision_at_10_max
value: 27.787630029931737
- type: nauc_precision_at_1_diff1
value: 54.930775252405226
- type: nauc_precision_at_1_max
value: 33.5116859460449
- type: nauc_precision_at_20_diff1
value: 23.94526878532444
- type: nauc_precision_at_20_max
value: 31.64356816310904
- type: nauc_precision_at_3_diff1
value: 37.36686654535447
- type: nauc_precision_at_3_max
value: 31.809307942763166
- type: nauc_precision_at_5_diff1
value: 30.658077015337877
- type: nauc_precision_at_5_max
value: 27.987876687409614
- type: nauc_recall_at_1000_diff1
value: 16.313332570062347
- type: nauc_recall_at_1000_max
value: 24.611193005552156
- type: nauc_recall_at_100_diff1
value: 22.517502540871675
- type: nauc_recall_at_100_max
value: 30.270048758028008
- type: nauc_recall_at_10_diff1
value: 28.942161278215146
- type: nauc_recall_at_10_max
value: 22.367333726084272
- type: nauc_recall_at_1_diff1
value: 54.7423053253451
- type: nauc_recall_at_1_max
value: 30.202680999900373
- type: nauc_recall_at_20_diff1
value: 28.593619502228517
- type: nauc_recall_at_20_max
value: 26.029280134895316
- type: nauc_recall_at_3_diff1
value: 38.74661393996696
- type: nauc_recall_at_3_max
value: 27.640169897015266
- type: nauc_recall_at_5_diff1
value: 32.83863931854332
- type: nauc_recall_at_5_max
value: 23.190141862761386
- type: ndcg_at_1
value: 21.695
- type: ndcg_at_10
value: 32.698
- type: ndcg_at_100
value: 37.641000000000005
- type: ndcg_at_1000
value: 40.245
- type: ndcg_at_20
value: 34.55
- type: ndcg_at_3
value: 27.632
- type: ndcg_at_5
value: 29.814
- type: precision_at_1
value: 21.695
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.108
- type: precision_at_20
value: 3.073
- type: precision_at_3
value: 11.977
- type: precision_at_5
value: 8.497
- type: recall_at_1
value: 20.277
- type: recall_at_10
value: 45.751999999999995
- type: recall_at_100
value: 68.72500000000001
- type: recall_at_1000
value: 88.307
- type: recall_at_20
value: 52.845
- type: recall_at_3
value: 31.928
- type: recall_at_5
value: 37.129
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 13.061
- type: map_at_10
value: 19.952
- type: map_at_100
value: 21.177
- type: map_at_1000
value: 21.304000000000002
- type: map_at_20
value: 20.543
- type: map_at_3
value: 17.552
- type: map_at_5
value: 18.83
- type: mrr_at_1
value: 16.417910447761194
- type: mrr_at_10
value: 23.800146094922205
- type: mrr_at_100
value: 24.80931429553203
- type: mrr_at_1000
value: 24.883782140653754
- type: mrr_at_20
value: 24.263678541366147
- type: mrr_at_3
value: 21.22719734660033
- type: mrr_at_5
value: 22.763266998341624
- type: nauc_map_at_1000_diff1
value: 28.454223094067572
- type: nauc_map_at_1000_max
value: 16.318450252339733
- type: nauc_map_at_100_diff1
value: 28.45468920929943
- type: nauc_map_at_100_max
value: 16.280745093761492
- type: nauc_map_at_10_diff1
value: 28.475318654355913
- type: nauc_map_at_10_max
value: 16.371297539246918
- type: nauc_map_at_1_diff1
value: 31.42806317607707
- type: nauc_map_at_1_max
value: 16.441570076459733
- type: nauc_map_at_20_diff1
value: 28.39188164693206
- type: nauc_map_at_20_max
value: 16.395102199165805
- type: nauc_map_at_3_diff1
value: 29.86515593563127
- type: nauc_map_at_3_max
value: 15.942028463305657
- type: nauc_map_at_5_diff1
value: 28.801171298398238
- type: nauc_map_at_5_max
value: 16.4938392502406
- type: nauc_mrr_at_1000_diff1
value: 29.404578525355905
- type: nauc_mrr_at_1000_max
value: 17.06043073367033
- type: nauc_mrr_at_100_diff1
value: 29.36603539662459
- type: nauc_mrr_at_100_max
value: 17.04266140433032
- type: nauc_mrr_at_10_diff1
value: 29.53064924621513
- type: nauc_mrr_at_10_max
value: 17.262149332295344
- type: nauc_mrr_at_1_diff1
value: 34.4591422232893
- type: nauc_mrr_at_1_max
value: 18.38441531072269
- type: nauc_mrr_at_20_diff1
value: 29.460658307695237
- type: nauc_mrr_at_20_max
value: 17.177008114438692
- type: nauc_mrr_at_3_diff1
value: 31.401082424691413
- type: nauc_mrr_at_3_max
value: 17.3600916517026
- type: nauc_mrr_at_5_diff1
value: 30.215482150264055
- type: nauc_mrr_at_5_max
value: 17.113047363760682
- type: nauc_ndcg_at_1000_diff1
value: 26.56223239664528
- type: nauc_ndcg_at_1000_max
value: 17.183346228015548
- type: nauc_ndcg_at_100_diff1
value: 26.452687521093587
- type: nauc_ndcg_at_100_max
value: 16.227786533978918
- type: nauc_ndcg_at_10_diff1
value: 26.789816487503863
- type: nauc_ndcg_at_10_max
value: 16.746307834455642
- type: nauc_ndcg_at_1_diff1
value: 34.4591422232893
- type: nauc_ndcg_at_1_max
value: 18.38441531072269
- type: nauc_ndcg_at_20_diff1
value: 26.5048210959802
- type: nauc_ndcg_at_20_max
value: 16.715661819049974
- type: nauc_ndcg_at_3_diff1
value: 29.856963970494903
- type: nauc_ndcg_at_3_max
value: 16.30448334725527
- type: nauc_ndcg_at_5_diff1
value: 27.70865544302564
- type: nauc_ndcg_at_5_max
value: 16.848739633174784
- type: nauc_precision_at_1000_diff1
value: 2.8134729866690966
- type: nauc_precision_at_1000_max
value: 4.912645906722423
- type: nauc_precision_at_100_diff1
value: 12.322710301703319
- type: nauc_precision_at_100_max
value: 7.326042531678355
- type: nauc_precision_at_10_diff1
value: 21.62505224748476
- type: nauc_precision_at_10_max
value: 13.898621571795822
- type: nauc_precision_at_1_diff1
value: 34.4591422232893
- type: nauc_precision_at_1_max
value: 18.38441531072269
- type: nauc_precision_at_20_diff1
value: 18.470843518995792
- type: nauc_precision_at_20_max
value: 12.518698137323229
- type: nauc_precision_at_3_diff1
value: 29.716961833159882
- type: nauc_precision_at_3_max
value: 15.75778937513801
- type: nauc_precision_at_5_diff1
value: 25.58620475567927
- type: nauc_precision_at_5_max
value: 15.305405508622808
- type: nauc_recall_at_1000_diff1
value: 14.510689791158532
- type: nauc_recall_at_1000_max
value: 28.477079172098108
- type: nauc_recall_at_100_diff1
value: 17.76094115011345
- type: nauc_recall_at_100_max
value: 14.496601818232598
- type: nauc_recall_at_10_diff1
value: 20.649486228934894
- type: nauc_recall_at_10_max
value: 16.53168596633394
- type: nauc_recall_at_1_diff1
value: 31.42806317607707
- type: nauc_recall_at_1_max
value: 16.441570076459733
- type: nauc_recall_at_20_diff1
value: 19.624191122275327
- type: nauc_recall_at_20_max
value: 16.252919740686675
- type: nauc_recall_at_3_diff1
value: 26.283117235001736
- type: nauc_recall_at_3_max
value: 15.13584134035691
- type: nauc_recall_at_5_diff1
value: 22.423046028190434
- type: nauc_recall_at_5_max
value: 16.432091896884675
- type: ndcg_at_1
value: 16.418
- type: ndcg_at_10
value: 24.565
- type: ndcg_at_100
value: 30.801000000000002
- type: ndcg_at_1000
value: 33.806999999999995
- type: ndcg_at_20
value: 26.512999999999998
- type: ndcg_at_3
value: 20.036
- type: ndcg_at_5
value: 22.131999999999998
- type: precision_at_1
value: 16.418
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.131
- type: precision_at_20
value: 2.91
- type: precision_at_3
value: 9.701
- type: precision_at_5
value: 7.289
- type: recall_at_1
value: 13.061
- type: recall_at_10
value: 34.961999999999996
- type: recall_at_100
value: 63.068000000000005
- type: recall_at_1000
value: 84.441
- type: recall_at_20
value: 42.013
- type: recall_at_3
value: 22.584
- type: recall_at_5
value: 27.950999999999997
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 24.813
- type: map_at_10
value: 33.593
- type: map_at_100
value: 34.909
- type: map_at_1000
value: 35.03
- type: map_at_20
value: 34.314
- type: map_at_3
value: 30.537999999999997
- type: map_at_5
value: 31.926
- type: mrr_at_1
value: 30.702598652550527
- type: mrr_at_10
value: 39.06545518431948
- type: mrr_at_100
value: 39.96716518258886
- type: mrr_at_1000
value: 40.01632070119006
- type: mrr_at_20
value: 39.530652551228556
- type: mrr_at_3
value: 36.58966955405838
- type: mrr_at_5
value: 37.75906320179657
- type: nauc_map_at_1000_diff1
value: 48.28735795160243
- type: nauc_map_at_1000_max
value: 35.857827572549965
- type: nauc_map_at_100_diff1
value: 48.28389536809195
- type: nauc_map_at_100_max
value: 35.8215491031998
- type: nauc_map_at_10_diff1
value: 48.4740670479628
- type: nauc_map_at_10_max
value: 35.580153501789354
- type: nauc_map_at_1_diff1
value: 53.76162783827061
- type: nauc_map_at_1_max
value: 35.987724515959904
- type: nauc_map_at_20_diff1
value: 48.39315138781542
- type: nauc_map_at_20_max
value: 35.67428155833968
- type: nauc_map_at_3_diff1
value: 49.222529850894034
- type: nauc_map_at_3_max
value: 35.79333062105859
- type: nauc_map_at_5_diff1
value: 49.39208013650273
- type: nauc_map_at_5_max
value: 35.69664733704277
- type: nauc_mrr_at_1000_diff1
value: 49.444671325795056
- type: nauc_mrr_at_1000_max
value: 38.3202725278323
- type: nauc_mrr_at_100_diff1
value: 49.42200393852792
- type: nauc_mrr_at_100_max
value: 38.31143458434877
- type: nauc_mrr_at_10_diff1
value: 49.370146109325866
- type: nauc_mrr_at_10_max
value: 38.160124098730236
- type: nauc_mrr_at_1_diff1
value: 56.1525302339788
- type: nauc_mrr_at_1_max
value: 40.79527109574158
- type: nauc_mrr_at_20_diff1
value: 49.45102444653989
- type: nauc_mrr_at_20_max
value: 38.25756256181159
- type: nauc_mrr_at_3_diff1
value: 50.14838465022493
- type: nauc_mrr_at_3_max
value: 38.52708666540869
- type: nauc_mrr_at_5_diff1
value: 49.904819237426004
- type: nauc_mrr_at_5_max
value: 38.38035220573582
- type: nauc_ndcg_at_1000_diff1
value: 45.58168600860826
- type: nauc_ndcg_at_1000_max
value: 36.563706940380555
- type: nauc_ndcg_at_100_diff1
value: 45.04365109638583
- type: nauc_ndcg_at_100_max
value: 36.033369265758196
- type: nauc_ndcg_at_10_diff1
value: 45.93349643770066
- type: nauc_ndcg_at_10_max
value: 34.89977214117315
- type: nauc_ndcg_at_1_diff1
value: 56.1525302339788
- type: nauc_ndcg_at_1_max
value: 40.79527109574158
- type: nauc_ndcg_at_20_diff1
value: 45.7339281351243
- type: nauc_ndcg_at_20_max
value: 35.06101137124627
- type: nauc_ndcg_at_3_diff1
value: 47.98894511737239
- type: nauc_ndcg_at_3_max
value: 36.848781920983924
- type: nauc_ndcg_at_5_diff1
value: 47.734244333699536
- type: nauc_ndcg_at_5_max
value: 35.698338385055536
- type: nauc_precision_at_1000_diff1
value: -10.146987477879856
- type: nauc_precision_at_1000_max
value: 4.5697671337926735
- type: nauc_precision_at_100_diff1
value: -0.23979892626648153
- type: nauc_precision_at_100_max
value: 16.05933541503149
- type: nauc_precision_at_10_diff1
value: 21.283169904859836
- type: nauc_precision_at_10_max
value: 26.180251486947377
- type: nauc_precision_at_1_diff1
value: 56.1525302339788
- type: nauc_precision_at_1_max
value: 40.79527109574158
- type: nauc_precision_at_20_diff1
value: 15.36557800905963
- type: nauc_precision_at_20_max
value: 23.405016571239443
- type: nauc_precision_at_3_diff1
value: 38.577821157959704
- type: nauc_precision_at_3_max
value: 36.41036363960068
- type: nauc_precision_at_5_diff1
value: 34.52857379797606
- type: nauc_precision_at_5_max
value: 33.5572774413252
- type: nauc_recall_at_1000_diff1
value: 17.59228181321267
- type: nauc_recall_at_1000_max
value: 38.45146719434902
- type: nauc_recall_at_100_diff1
value: 23.718352007830674
- type: nauc_recall_at_100_max
value: 30.191167925909102
- type: nauc_recall_at_10_diff1
value: 35.33833540132451
- type: nauc_recall_at_10_max
value: 27.994679353081946
- type: nauc_recall_at_1_diff1
value: 53.76162783827061
- type: nauc_recall_at_1_max
value: 35.987724515959904
- type: nauc_recall_at_20_diff1
value: 34.02078540569994
- type: nauc_recall_at_20_max
value: 27.685543279801237
- type: nauc_recall_at_3_diff1
value: 42.17648880447465
- type: nauc_recall_at_3_max
value: 32.62814119536231
- type: nauc_recall_at_5_diff1
value: 41.358303440502375
- type: nauc_recall_at_5_max
value: 30.630510317531744
- type: ndcg_at_1
value: 30.703000000000003
- type: ndcg_at_10
value: 39.300000000000004
- type: ndcg_at_100
value: 45.1
- type: ndcg_at_1000
value: 47.386
- type: ndcg_at_20
value: 41.461999999999996
- type: ndcg_at_3
value: 34.277
- type: ndcg_at_5
value: 36.027
- type: precision_at_1
value: 30.703000000000003
- type: precision_at_10
value: 7.401000000000001
- type: precision_at_100
value: 1.218
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_20
value: 4.398
- type: precision_at_3
value: 16.426
- type: precision_at_5
value: 11.472999999999999
- type: recall_at_1
value: 24.813
- type: recall_at_10
value: 51.044999999999995
- type: recall_at_100
value: 76.106
- type: recall_at_1000
value: 91.19
- type: recall_at_20
value: 58.744
- type: recall_at_3
value: 36.222
- type: recall_at_5
value: 41.349000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 23.785999999999998
- type: map_at_10
value: 32.908
- type: map_at_100
value: 34.216
- type: map_at_1000
value: 34.345
- type: map_at_20
value: 33.669
- type: map_at_3
value: 30.084
- type: map_at_5
value: 31.692999999999998
- type: mrr_at_1
value: 28.538812785388128
- type: mrr_at_10
value: 37.57986337609625
- type: mrr_at_100
value: 38.51813582385385
- type: mrr_at_1000
value: 38.57741905983437
- type: mrr_at_20
value: 38.12193172703318
- type: mrr_at_3
value: 35.10273972602741
- type: mrr_at_5
value: 36.63812785388129
- type: nauc_map_at_1000_diff1
value: 50.257939242705326
- type: nauc_map_at_1000_max
value: 40.82148200341322
- type: nauc_map_at_100_diff1
value: 50.2364465622847
- type: nauc_map_at_100_max
value: 40.80019994119006
- type: nauc_map_at_10_diff1
value: 50.37138441336593
- type: nauc_map_at_10_max
value: 40.50437470647407
- type: nauc_map_at_1_diff1
value: 56.771276390581896
- type: nauc_map_at_1_max
value: 39.21668184007358
- type: nauc_map_at_20_diff1
value: 50.27381803917208
- type: nauc_map_at_20_max
value: 40.64985824674605
- type: nauc_map_at_3_diff1
value: 51.75055615731813
- type: nauc_map_at_3_max
value: 39.78757508816205
- type: nauc_map_at_5_diff1
value: 50.8734893320248
- type: nauc_map_at_5_max
value: 40.02410017589514
- type: nauc_mrr_at_1000_diff1
value: 50.713285499907116
- type: nauc_mrr_at_1000_max
value: 41.177149045516344
- type: nauc_mrr_at_100_diff1
value: 50.68880760672645
- type: nauc_mrr_at_100_max
value: 41.15209896241968
- type: nauc_mrr_at_10_diff1
value: 50.74366283036783
- type: nauc_mrr_at_10_max
value: 41.282470784533444
- type: nauc_mrr_at_1_diff1
value: 57.84445991591638
- type: nauc_mrr_at_1_max
value: 43.215624153592266
- type: nauc_mrr_at_20_diff1
value: 50.64084950415708
- type: nauc_mrr_at_20_max
value: 41.08394597751598
- type: nauc_mrr_at_3_diff1
value: 51.65659849102624
- type: nauc_mrr_at_3_max
value: 41.60925465118275
- type: nauc_mrr_at_5_diff1
value: 50.950695800932365
- type: nauc_mrr_at_5_max
value: 41.279261412729824
- type: nauc_ndcg_at_1000_diff1
value: 47.749991208061715
- type: nauc_ndcg_at_1000_max
value: 41.127897310574845
- type: nauc_ndcg_at_100_diff1
value: 47.105720634992295
- type: nauc_ndcg_at_100_max
value: 40.47798885234411
- type: nauc_ndcg_at_10_diff1
value: 47.531993133798885
- type: nauc_ndcg_at_10_max
value: 40.31306642364056
- type: nauc_ndcg_at_1_diff1
value: 57.84445991591638
- type: nauc_ndcg_at_1_max
value: 43.215624153592266
- type: nauc_ndcg_at_20_diff1
value: 47.312682165844976
- type: nauc_ndcg_at_20_max
value: 40.288339506648256
- type: nauc_ndcg_at_3_diff1
value: 50.0455086369841
- type: nauc_ndcg_at_3_max
value: 40.54078090764959
- type: nauc_ndcg_at_5_diff1
value: 48.579497493398414
- type: nauc_ndcg_at_5_max
value: 40.12205334756227
- type: nauc_precision_at_1000_diff1
value: -10.060709814882319
- type: nauc_precision_at_1000_max
value: 3.3172918415791113
- type: nauc_precision_at_100_diff1
value: 2.693932701389068
- type: nauc_precision_at_100_max
value: 18.87077564079401
- type: nauc_precision_at_10_diff1
value: 25.065184074398722
- type: nauc_precision_at_10_max
value: 37.59757586895368
- type: nauc_precision_at_1_diff1
value: 57.84445991591638
- type: nauc_precision_at_1_max
value: 43.215624153592266
- type: nauc_precision_at_20_diff1
value: 18.242635445538607
- type: nauc_precision_at_20_max
value: 32.84491156206929
- type: nauc_precision_at_3_diff1
value: 40.10849878825809
- type: nauc_precision_at_3_max
value: 41.60542328360796
- type: nauc_precision_at_5_diff1
value: 33.704250787432464
- type: nauc_precision_at_5_max
value: 39.62104984042399
- type: nauc_recall_at_1000_diff1
value: 20.865834650783867
- type: nauc_recall_at_1000_max
value: 40.46838059326433
- type: nauc_recall_at_100_diff1
value: 27.936404516968814
- type: nauc_recall_at_100_max
value: 31.42492046918158
- type: nauc_recall_at_10_diff1
value: 36.13832214819092
- type: nauc_recall_at_10_max
value: 35.455673437705734
- type: nauc_recall_at_1_diff1
value: 56.771276390581896
- type: nauc_recall_at_1_max
value: 39.21668184007358
- type: nauc_recall_at_20_diff1
value: 34.82064123770155
- type: nauc_recall_at_20_max
value: 34.607469690168344
- type: nauc_recall_at_3_diff1
value: 44.00375363375299
- type: nauc_recall_at_3_max
value: 36.44058673547244
- type: nauc_recall_at_5_diff1
value: 40.13678179130244
- type: nauc_recall_at_5_max
value: 35.72641024766304
- type: ndcg_at_1
value: 28.538999999999998
- type: ndcg_at_10
value: 38.234
- type: ndcg_at_100
value: 44.025
- type: ndcg_at_1000
value: 46.611999999999995
- type: ndcg_at_20
value: 40.528999999999996
- type: ndcg_at_3
value: 33.603
- type: ndcg_at_5
value: 35.91
- type: precision_at_1
value: 28.538999999999998
- type: precision_at_10
value: 6.848999999999999
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.155
- type: precision_at_20
value: 4.15
- type: precision_at_3
value: 16.02
- type: precision_at_5
value: 11.416
- type: recall_at_1
value: 23.785999999999998
- type: recall_at_10
value: 49.475
- type: recall_at_100
value: 74.211
- type: recall_at_1000
value: 91.69699999999999
- type: recall_at_20
value: 57.399
- type: recall_at_3
value: 36.716
- type: recall_at_5
value: 42.698
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 19.383
- type: map_at_10
value: 25.779999999999998
- type: map_at_100
value: 26.692
- type: map_at_1000
value: 26.790000000000003
- type: map_at_20
value: 26.288
- type: map_at_3
value: 23.286
- type: map_at_5
value: 24.881
- type: mrr_at_1
value: 21.625766871165645
- type: mrr_at_10
value: 28.134677183756935
- type: mrr_at_100
value: 28.990470265944246
- type: mrr_at_1000
value: 29.06015337908212
- type: mrr_at_20
value: 28.6217210745573
- type: mrr_at_3
value: 25.894683026584875
- type: mrr_at_5
value: 27.29038854805726
- type: nauc_map_at_1000_diff1
value: 49.911847914254295
- type: nauc_map_at_1000_max
value: 35.48333790982145
- type: nauc_map_at_100_diff1
value: 49.90404203205197
- type: nauc_map_at_100_max
value: 35.467667772052444
- type: nauc_map_at_10_diff1
value: 50.116677861558436
- type: nauc_map_at_10_max
value: 35.22777804014755
- type: nauc_map_at_1_diff1
value: 54.43337006606851
- type: nauc_map_at_1_max
value: 34.97044319260595
- type: nauc_map_at_20_diff1
value: 50.00485022476988
- type: nauc_map_at_20_max
value: 35.27772351072898
- type: nauc_map_at_3_diff1
value: 51.101154626515466
- type: nauc_map_at_3_max
value: 35.01188192392699
- type: nauc_map_at_5_diff1
value: 50.85455903915764
- type: nauc_map_at_5_max
value: 35.222924790317556
- type: nauc_mrr_at_1000_diff1
value: 50.469777332868446
- type: nauc_mrr_at_1000_max
value: 36.806239966091134
- type: nauc_mrr_at_100_diff1
value: 50.43774386023924
- type: nauc_mrr_at_100_max
value: 36.81415281954135
- type: nauc_mrr_at_10_diff1
value: 50.65191632415702
- type: nauc_mrr_at_10_max
value: 36.796669555237735
- type: nauc_mrr_at_1_diff1
value: 56.69444714733734
- type: nauc_mrr_at_1_max
value: 37.950238556672936
- type: nauc_mrr_at_20_diff1
value: 50.58591151000893
- type: nauc_mrr_at_20_max
value: 36.74486652850544
- type: nauc_mrr_at_3_diff1
value: 51.874376496561666
- type: nauc_mrr_at_3_max
value: 37.17436215275038
- type: nauc_mrr_at_5_diff1
value: 51.23784797508226
- type: nauc_mrr_at_5_max
value: 36.797843583446976
- type: nauc_ndcg_at_1000_diff1
value: 47.02453368687982
- type: nauc_ndcg_at_1000_max
value: 35.645475773480015
- type: nauc_ndcg_at_100_diff1
value: 46.3919750229052
- type: nauc_ndcg_at_100_max
value: 35.52540027815628
- type: nauc_ndcg_at_10_diff1
value: 47.783397540510954
- type: nauc_ndcg_at_10_max
value: 34.978511185065415
- type: nauc_ndcg_at_1_diff1
value: 56.69444714733734
- type: nauc_ndcg_at_1_max
value: 37.950238556672936
- type: nauc_ndcg_at_20_diff1
value: 47.54223429140186
- type: nauc_ndcg_at_20_max
value: 34.96367876086221
- type: nauc_ndcg_at_3_diff1
value: 49.6543909342716
- type: nauc_ndcg_at_3_max
value: 35.55329341746738
- type: nauc_ndcg_at_5_diff1
value: 49.27526999339532
- type: nauc_ndcg_at_5_max
value: 35.269553382209025
- type: nauc_precision_at_1000_diff1
value: 4.152639847319254
- type: nauc_precision_at_1000_max
value: 19.860239840663723
- type: nauc_precision_at_100_diff1
value: 20.5112056117488
- type: nauc_precision_at_100_max
value: 33.986584990989556
- type: nauc_precision_at_10_diff1
value: 36.61149278468018
- type: nauc_precision_at_10_max
value: 38.71670013842566
- type: nauc_precision_at_1_diff1
value: 56.69444714733734
- type: nauc_precision_at_1_max
value: 37.950238556672936
- type: nauc_precision_at_20_diff1
value: 33.440399762319224
- type: nauc_precision_at_20_max
value: 36.81534392845937
- type: nauc_precision_at_3_diff1
value: 46.57940283237786
- type: nauc_precision_at_3_max
value: 40.2021686353609
- type: nauc_precision_at_5_diff1
value: 43.24878459897664
- type: nauc_precision_at_5_max
value: 39.56338396559801
- type: nauc_recall_at_1000_diff1
value: 30.848441444485346
- type: nauc_recall_at_1000_max
value: 29.206281789808227
- type: nauc_recall_at_100_diff1
value: 30.51257893193596
- type: nauc_recall_at_100_max
value: 31.092511899847807
- type: nauc_recall_at_10_diff1
value: 39.87582825785208
- type: nauc_recall_at_10_max
value: 31.21163188638994
- type: nauc_recall_at_1_diff1
value: 54.43337006606851
- type: nauc_recall_at_1_max
value: 34.97044319260595
- type: nauc_recall_at_20_diff1
value: 38.87715004078399
- type: nauc_recall_at_20_max
value: 30.6988866615607
- type: nauc_recall_at_3_diff1
value: 45.86720941947824
- type: nauc_recall_at_3_max
value: 32.61976516194995
- type: nauc_recall_at_5_diff1
value: 44.153048811811296
- type: nauc_recall_at_5_max
value: 32.11676236101473
- type: ndcg_at_1
value: 21.626
- type: ndcg_at_10
value: 29.859
- type: ndcg_at_100
value: 34.472
- type: ndcg_at_1000
value: 37.038
- type: ndcg_at_20
value: 31.635
- type: ndcg_at_3
value: 25.288
- type: ndcg_at_5
value: 27.855999999999998
- type: precision_at_1
value: 21.626
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.7779999999999999
- type: precision_at_1000
value: 0.108
- type: precision_at_20
value: 2.899
- type: precision_at_3
value: 11.094
- type: precision_at_5
value: 8.251999999999999
- type: recall_at_1
value: 19.383
- type: recall_at_10
value: 40.076
- type: recall_at_100
value: 61.242
- type: recall_at_1000
value: 80.314
- type: recall_at_20
value: 46.758
- type: recall_at_3
value: 27.962999999999997
- type: recall_at_5
value: 34.147
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 13.971
- type: map_at_10
value: 19.858
- type: map_at_100
value: 20.909
- type: map_at_1000
value: 21.047
- type: map_at_20
value: 20.426
- type: map_at_3
value: 17.888
- type: map_at_5
value: 18.948
- type: mrr_at_1
value: 17.377838953888507
- type: mrr_at_10
value: 23.3541168245229
- type: mrr_at_100
value: 24.28548028171366
- type: mrr_at_1000
value: 24.377827358449824
- type: mrr_at_20
value: 23.88638570357191
- type: mrr_at_3
value: 21.36958017894013
- type: mrr_at_5
value: 22.512044046799744
- type: nauc_map_at_1000_diff1
value: 37.755702284175236
- type: nauc_map_at_1000_max
value: 29.985902989676845
- type: nauc_map_at_100_diff1
value: 37.75580303049384
- type: nauc_map_at_100_max
value: 29.948110193131978
- type: nauc_map_at_10_diff1
value: 38.173556250676114
- type: nauc_map_at_10_max
value: 29.977909805969883
- type: nauc_map_at_1_diff1
value: 46.397651367070175
- type: nauc_map_at_1_max
value: 30.5126306843905
- type: nauc_map_at_20_diff1
value: 37.90916483077938
- type: nauc_map_at_20_max
value: 29.968172386265085
- type: nauc_map_at_3_diff1
value: 39.72488280969923
- type: nauc_map_at_3_max
value: 30.188740025946746
- type: nauc_map_at_5_diff1
value: 38.630985019403546
- type: nauc_map_at_5_max
value: 29.879481875315722
- type: nauc_mrr_at_1000_diff1
value: 35.84642181893316
- type: nauc_mrr_at_1000_max
value: 31.670254385165258
- type: nauc_mrr_at_100_diff1
value: 35.83045706635439
- type: nauc_mrr_at_100_max
value: 31.66496126819178
- type: nauc_mrr_at_10_diff1
value: 36.09970407587168
- type: nauc_mrr_at_10_max
value: 31.8396436118298
- type: nauc_mrr_at_1_diff1
value: 43.10155389869899
- type: nauc_mrr_at_1_max
value: 34.16020753098804
- type: nauc_mrr_at_20_diff1
value: 35.896835477664894
- type: nauc_mrr_at_20_max
value: 31.73239216068521
- type: nauc_mrr_at_3_diff1
value: 37.466645069277035
- type: nauc_mrr_at_3_max
value: 32.49242404383092
- type: nauc_mrr_at_5_diff1
value: 36.405092730921915
- type: nauc_mrr_at_5_max
value: 31.92760166562469
- type: nauc_ndcg_at_1000_diff1
value: 33.90603402215118
- type: nauc_ndcg_at_1000_max
value: 29.729336831998786
- type: nauc_ndcg_at_100_diff1
value: 33.50850724053531
- type: nauc_ndcg_at_100_max
value: 29.306492255119792
- type: nauc_ndcg_at_10_diff1
value: 34.97587417274376
- type: nauc_ndcg_at_10_max
value: 29.85287300723461
- type: nauc_ndcg_at_1_diff1
value: 43.10155389869899
- type: nauc_ndcg_at_1_max
value: 34.16020753098804
- type: nauc_ndcg_at_20_diff1
value: 34.20570916045498
- type: nauc_ndcg_at_20_max
value: 29.60222341192852
- type: nauc_ndcg_at_3_diff1
value: 37.339592233518914
- type: nauc_ndcg_at_3_max
value: 31.040627782319678
- type: nauc_ndcg_at_5_diff1
value: 35.77389927348168
- type: nauc_ndcg_at_5_max
value: 29.914791666809233
- type: nauc_precision_at_1000_diff1
value: 2.899363456166234
- type: nauc_precision_at_1000_max
value: 22.023156102458266
- type: nauc_precision_at_100_diff1
value: 11.303787835861796
- type: nauc_precision_at_100_max
value: 26.139688179760295
- type: nauc_precision_at_10_diff1
value: 22.48116840886295
- type: nauc_precision_at_10_max
value: 31.359997435260933
- type: nauc_precision_at_1_diff1
value: 43.10155389869899
- type: nauc_precision_at_1_max
value: 34.16020753098804
- type: nauc_precision_at_20_diff1
value: 19.052342801259027
- type: nauc_precision_at_20_max
value: 30.295818240552265
- type: nauc_precision_at_3_diff1
value: 29.98671296818897
- type: nauc_precision_at_3_max
value: 33.518872240321095
- type: nauc_precision_at_5_diff1
value: 26.140183342571895
- type: nauc_precision_at_5_max
value: 31.731967908857495
- type: nauc_recall_at_1000_diff1
value: 18.21949169710172
- type: nauc_recall_at_1000_max
value: 21.981372080200686
- type: nauc_recall_at_100_diff1
value: 20.257733390464356
- type: nauc_recall_at_100_max
value: 22.351366657877755
- type: nauc_recall_at_10_diff1
value: 27.478094748761023
- type: nauc_recall_at_10_max
value: 25.693669844365168
- type: nauc_recall_at_1_diff1
value: 46.397651367070175
- type: nauc_recall_at_1_max
value: 30.5126306843905
- type: nauc_recall_at_20_diff1
value: 24.426997641435758
- type: nauc_recall_at_20_max
value: 24.371909403981874
- type: nauc_recall_at_3_diff1
value: 33.01864346236186
- type: nauc_recall_at_3_max
value: 27.470654873232615
- type: nauc_recall_at_5_diff1
value: 29.70016272411814
- type: nauc_recall_at_5_max
value: 26.045940185547146
- type: ndcg_at_1
value: 17.378
- type: ndcg_at_10
value: 23.798
- type: ndcg_at_100
value: 28.907
- type: ndcg_at_1000
value: 32.273
- type: ndcg_at_20
value: 25.707
- type: ndcg_at_3
value: 20.172
- type: ndcg_at_5
value: 21.802
- type: precision_at_1
value: 17.378
- type: precision_at_10
value: 4.425
- type: precision_at_100
value: 0.83
- type: precision_at_1000
value: 0.129
- type: precision_at_20
value: 2.796
- type: precision_at_3
value: 9.600999999999999
- type: precision_at_5
value: 6.999
- type: recall_at_1
value: 13.971
- type: recall_at_10
value: 32.24
- type: recall_at_100
value: 55.257
- type: recall_at_1000
value: 79.59
- type: recall_at_20
value: 39.229
- type: recall_at_3
value: 22.067999999999998
- type: recall_at_5
value: 26.293
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 22.913
- type: map_at_10
value: 31.479000000000003
- type: map_at_100
value: 32.552
- type: map_at_1000
value: 32.653
- type: map_at_20
value: 31.990000000000002
- type: map_at_3
value: 28.592000000000002
- type: map_at_5
value: 30.373
- type: mrr_at_1
value: 26.679104477611943
- type: mrr_at_10
value: 35.04575337597722
- type: mrr_at_100
value: 35.95382623195233
- type: mrr_at_1000
value: 36.010139872633495
- type: mrr_at_20
value: 35.53513175560514
- type: mrr_at_3
value: 32.50932835820894
- type: mrr_at_5
value: 34.029850746268615
- type: nauc_map_at_1000_diff1
value: 48.91111448779822
- type: nauc_map_at_1000_max
value: 42.45001990527906
- type: nauc_map_at_100_diff1
value: 48.89190728335153
- type: nauc_map_at_100_max
value: 42.42208663513962
- type: nauc_map_at_10_diff1
value: 49.18102301155058
- type: nauc_map_at_10_max
value: 42.38922838539626
- type: nauc_map_at_1_diff1
value: 56.80065296129456
- type: nauc_map_at_1_max
value: 44.49602807145463
- type: nauc_map_at_20_diff1
value: 48.8774198345571
- type: nauc_map_at_20_max
value: 42.33035822233443
- type: nauc_map_at_3_diff1
value: 50.18578760557955
- type: nauc_map_at_3_max
value: 42.049682832486525
- type: nauc_map_at_5_diff1
value: 49.486124069477185
- type: nauc_map_at_5_max
value: 42.55783777242684
- type: nauc_mrr_at_1000_diff1
value: 49.36889165411647
- type: nauc_mrr_at_1000_max
value: 44.77127656729937
- type: nauc_mrr_at_100_diff1
value: 49.334724602529384
- type: nauc_mrr_at_100_max
value: 44.74054390604988
- type: nauc_mrr_at_10_diff1
value: 49.51295857027244
- type: nauc_mrr_at_10_max
value: 44.9170190483006
- type: nauc_mrr_at_1_diff1
value: 58.082371939479195
- type: nauc_mrr_at_1_max
value: 47.809669154624714
- type: nauc_mrr_at_20_diff1
value: 49.32453628990772
- type: nauc_mrr_at_20_max
value: 44.85511763034043
- type: nauc_mrr_at_3_diff1
value: 50.28814557566801
- type: nauc_mrr_at_3_max
value: 45.02918648542346
- type: nauc_mrr_at_5_diff1
value: 49.70083559303071
- type: nauc_mrr_at_5_max
value: 45.24452033882872
- type: nauc_ndcg_at_1000_diff1
value: 45.98074456611413
- type: nauc_ndcg_at_1000_max
value: 42.31080875816173
- type: nauc_ndcg_at_100_diff1
value: 45.40775721447589
- type: nauc_ndcg_at_100_max
value: 41.66948346471324
- type: nauc_ndcg_at_10_diff1
value: 46.45033224294541
- type: nauc_ndcg_at_10_max
value: 41.94587530702735
- type: nauc_ndcg_at_1_diff1
value: 58.082371939479195
- type: nauc_ndcg_at_1_max
value: 47.809669154624714
- type: nauc_ndcg_at_20_diff1
value: 45.52119345567974
- type: nauc_ndcg_at_20_max
value: 41.689091488310375
- type: nauc_ndcg_at_3_diff1
value: 47.7611351600258
- type: nauc_ndcg_at_3_max
value: 42.12421531952486
- type: nauc_ndcg_at_5_diff1
value: 46.96076236146089
- type: nauc_ndcg_at_5_max
value: 42.48883644550073
- type: nauc_precision_at_1000_diff1
value: -9.994796100724528
- type: nauc_precision_at_1000_max
value: 4.537719884624942
- type: nauc_precision_at_100_diff1
value: 5.176562004472227
- type: nauc_precision_at_100_max
value: 20.363881706230003
- type: nauc_precision_at_10_diff1
value: 28.13817373553328
- type: nauc_precision_at_10_max
value: 35.95475620553222
- type: nauc_precision_at_1_diff1
value: 58.082371939479195
- type: nauc_precision_at_1_max
value: 47.809669154624714
- type: nauc_precision_at_20_diff1
value: 20.22951508884242
- type: nauc_precision_at_20_max
value: 31.772942110564383
- type: nauc_precision_at_3_diff1
value: 39.394928690349715
- type: nauc_precision_at_3_max
value: 41.22337390889921
- type: nauc_precision_at_5_diff1
value: 33.3002766373711
- type: nauc_precision_at_5_max
value: 39.798736765093004
- type: nauc_recall_at_1000_diff1
value: 23.754029850817286
- type: nauc_recall_at_1000_max
value: 33.93123719288166
- type: nauc_recall_at_100_diff1
value: 27.544628904110198
- type: nauc_recall_at_100_max
value: 30.334114782639638
- type: nauc_recall_at_10_diff1
value: 36.834663549355106
- type: nauc_recall_at_10_max
value: 35.98186975365149
- type: nauc_recall_at_1_diff1
value: 56.80065296129456
- type: nauc_recall_at_1_max
value: 44.49602807145463
- type: nauc_recall_at_20_diff1
value: 32.36845729763492
- type: nauc_recall_at_20_max
value: 33.854227154499725
- type: nauc_recall_at_3_diff1
value: 41.3767634461603
- type: nauc_recall_at_3_max
value: 37.614960977002795
- type: nauc_recall_at_5_diff1
value: 38.83337733461508
- type: nauc_recall_at_5_max
value: 38.23953256920769
- type: ndcg_at_1
value: 26.679000000000002
- type: ndcg_at_10
value: 36.620999999999995
- type: ndcg_at_100
value: 41.942
- type: ndcg_at_1000
value: 44.374
- type: ndcg_at_20
value: 38.415
- type: ndcg_at_3
value: 31.538
- type: ndcg_at_5
value: 34.178999999999995
- type: precision_at_1
value: 26.679000000000002
- type: precision_at_10
value: 6.259
- type: precision_at_100
value: 1.006
- type: precision_at_1000
value: 0.133
- type: precision_at_20
value: 3.6290000000000004
- type: precision_at_3
value: 14.335
- type: precision_at_5
value: 10.485
- type: recall_at_1
value: 22.913
- type: recall_at_10
value: 48.716
- type: recall_at_100
value: 72.372
- type: recall_at_1000
value: 89.531
- type: recall_at_20
value: 55.269999999999996
- type: recall_at_3
value: 35.074
- type: recall_at_5
value: 41.537
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 24.111
- type: map_at_10
value: 32.586999999999996
- type: map_at_100
value: 34.12
- type: map_at_1000
value: 34.365
- type: map_at_20
value: 33.357
- type: map_at_3
value: 29.941000000000003
- type: map_at_5
value: 31.576999999999998
- type: mrr_at_1
value: 29.841897233201582
- type: mrr_at_10
value: 37.7974621996361
- type: mrr_at_100
value: 38.70855926216216
- type: mrr_at_1000
value: 38.78160750260466
- type: mrr_at_20
value: 38.28642292998807
- type: mrr_at_3
value: 35.540184453227944
- type: mrr_at_5
value: 36.9631093544137
- type: nauc_map_at_1000_diff1
value: 48.66304930719204
- type: nauc_map_at_1000_max
value: 41.95032096153431
- type: nauc_map_at_100_diff1
value: 48.79765165759156
- type: nauc_map_at_100_max
value: 42.07729091897765
- type: nauc_map_at_10_diff1
value: 48.857016210602815
- type: nauc_map_at_10_max
value: 41.40602447689144
- type: nauc_map_at_1_diff1
value: 56.30285091540639
- type: nauc_map_at_1_max
value: 41.50722701029977
- type: nauc_map_at_20_diff1
value: 48.81899029977082
- type: nauc_map_at_20_max
value: 41.59748805021434
- type: nauc_map_at_3_diff1
value: 50.427376588223524
- type: nauc_map_at_3_max
value: 42.16021881535689
- type: nauc_map_at_5_diff1
value: 49.30376475055564
- type: nauc_map_at_5_max
value: 40.80672149673538
- type: nauc_mrr_at_1000_diff1
value: 46.345186006817585
- type: nauc_mrr_at_1000_max
value: 43.50174997374244
- type: nauc_mrr_at_100_diff1
value: 46.295807235163586
- type: nauc_mrr_at_100_max
value: 43.50031316913962
- type: nauc_mrr_at_10_diff1
value: 46.47773451307542
- type: nauc_mrr_at_10_max
value: 43.458160190412194
- type: nauc_mrr_at_1_diff1
value: 52.74875572154898
- type: nauc_mrr_at_1_max
value: 44.75642046390427
- type: nauc_mrr_at_20_diff1
value: 46.3900630839787
- type: nauc_mrr_at_20_max
value: 43.50880808108073
- type: nauc_mrr_at_3_diff1
value: 48.19554935809951
- type: nauc_mrr_at_3_max
value: 44.6333875693719
- type: nauc_mrr_at_5_diff1
value: 46.93570802196158
- type: nauc_mrr_at_5_max
value: 43.125336922131
- type: nauc_ndcg_at_1000_diff1
value: 45.33511371430761
- type: nauc_ndcg_at_1000_max
value: 42.71458917809584
- type: nauc_ndcg_at_100_diff1
value: 44.722449600774205
- type: nauc_ndcg_at_100_max
value: 42.914978315188804
- type: nauc_ndcg_at_10_diff1
value: 45.227438065629464
- type: nauc_ndcg_at_10_max
value: 41.683893995546676
- type: nauc_ndcg_at_1_diff1
value: 52.74875572154898
- type: nauc_ndcg_at_1_max
value: 44.75642046390427
- type: nauc_ndcg_at_20_diff1
value: 44.55699203003113
- type: nauc_ndcg_at_20_max
value: 41.415096016236674
- type: nauc_ndcg_at_3_diff1
value: 48.10326999177321
- type: nauc_ndcg_at_3_max
value: 44.23088613607569
- type: nauc_ndcg_at_5_diff1
value: 46.38361769072502
- type: nauc_ndcg_at_5_max
value: 41.022399131998824
- type: nauc_precision_at_1000_diff1
value: -14.552159603323588
- type: nauc_precision_at_1000_max
value: -3.1254911408763344
- type: nauc_precision_at_100_diff1
value: -0.23909748335372277
- type: nauc_precision_at_100_max
value: 16.89399096801419
- type: nauc_precision_at_10_diff1
value: 20.38093826207771
- type: nauc_precision_at_10_max
value: 33.609116276999984
- type: nauc_precision_at_1_diff1
value: 52.74875572154898
- type: nauc_precision_at_1_max
value: 44.75642046390427
- type: nauc_precision_at_20_diff1
value: 14.390742980745472
- type: nauc_precision_at_20_max
value: 28.70169752553835
- type: nauc_precision_at_3_diff1
value: 35.95450638656823
- type: nauc_precision_at_3_max
value: 43.164677486075306
- type: nauc_precision_at_5_diff1
value: 28.145486984369327
- type: nauc_precision_at_5_max
value: 35.15010507177889
- type: nauc_recall_at_1000_diff1
value: 18.31197862072379
- type: nauc_recall_at_1000_max
value: 39.21531256166399
- type: nauc_recall_at_100_diff1
value: 25.523825639252152
- type: nauc_recall_at_100_max
value: 41.261468285876326
- type: nauc_recall_at_10_diff1
value: 33.25089536691914
- type: nauc_recall_at_10_max
value: 35.62911707394771
- type: nauc_recall_at_1_diff1
value: 56.30285091540639
- type: nauc_recall_at_1_max
value: 41.50722701029977
- type: nauc_recall_at_20_diff1
value: 29.960428793914986
- type: nauc_recall_at_20_max
value: 35.185821233232936
- type: nauc_recall_at_3_diff1
value: 43.06694028157036
- type: nauc_recall_at_3_max
value: 39.592097716636886
- type: nauc_recall_at_5_diff1
value: 37.78776183699997
- type: nauc_recall_at_5_max
value: 33.29553795647041
- type: ndcg_at_1
value: 29.842000000000002
- type: ndcg_at_10
value: 38.17
- type: ndcg_at_100
value: 43.69
- type: ndcg_at_1000
value: 46.647
- type: ndcg_at_20
value: 40.071
- type: ndcg_at_3
value: 34.095
- type: ndcg_at_5
value: 36.24
- type: precision_at_1
value: 29.842000000000002
- type: precision_at_10
value: 7.273000000000001
- type: precision_at_100
value: 1.47
- type: precision_at_1000
value: 0.23800000000000002
- type: precision_at_20
value: 4.595
- type: precision_at_3
value: 16.073999999999998
- type: precision_at_5
value: 11.738999999999999
- type: recall_at_1
value: 24.111
- type: recall_at_10
value: 48.004000000000005
- type: recall_at_100
value: 72.94699999999999
- type: recall_at_1000
value: 91.888
- type: recall_at_20
value: 55.216
- type: recall_at_3
value: 35.811
- type: recall_at_5
value: 41.89
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 18.028
- type: map_at_10
value: 24.82
- type: map_at_100
value: 25.768
- type: map_at_1000
value: 25.884
- type: map_at_20
value: 25.320999999999998
- type: map_at_3
value: 22.585
- type: map_at_5
value: 23.97
- type: mrr_at_1
value: 19.963031423290204
- type: mrr_at_10
value: 26.794589678138646
- type: mrr_at_100
value: 27.68954610737053
- type: mrr_at_1000
value: 27.779869498156962
- type: mrr_at_20
value: 27.282527769829105
- type: mrr_at_3
value: 24.522489217498453
- type: mrr_at_5
value: 25.89956869993838
- type: nauc_map_at_1000_diff1
value: 29.354656347993824
- type: nauc_map_at_1000_max
value: 25.777632482963902
- type: nauc_map_at_100_diff1
value: 29.344606522701266
- type: nauc_map_at_100_max
value: 25.74884197256006
- type: nauc_map_at_10_diff1
value: 29.263520383732228
- type: nauc_map_at_10_max
value: 25.956046243549185
- type: nauc_map_at_1_diff1
value: 32.10083227622593
- type: nauc_map_at_1_max
value: 26.082934151928335
- type: nauc_map_at_20_diff1
value: 29.252814048502522
- type: nauc_map_at_20_max
value: 25.73021877076165
- type: nauc_map_at_3_diff1
value: 30.402946945740624
- type: nauc_map_at_3_max
value: 25.479666391115586
- type: nauc_map_at_5_diff1
value: 30.112881326178037
- type: nauc_map_at_5_max
value: 26.181377819243316
- type: nauc_mrr_at_1000_diff1
value: 29.36011085257817
- type: nauc_mrr_at_1000_max
value: 26.512397267441905
- type: nauc_mrr_at_100_diff1
value: 29.34716126133153
- type: nauc_mrr_at_100_max
value: 26.492126338060768
- type: nauc_mrr_at_10_diff1
value: 29.291501370936746
- type: nauc_mrr_at_10_max
value: 26.720460613779785
- type: nauc_mrr_at_1_diff1
value: 32.59370226571806
- type: nauc_mrr_at_1_max
value: 27.210974235559654
- type: nauc_mrr_at_20_diff1
value: 29.339683741214824
- type: nauc_mrr_at_20_max
value: 26.565206125640618
- type: nauc_mrr_at_3_diff1
value: 30.615760229041822
- type: nauc_mrr_at_3_max
value: 26.964923490024372
- type: nauc_mrr_at_5_diff1
value: 30.09283430145986
- type: nauc_mrr_at_5_max
value: 26.97653024091469
- type: nauc_ndcg_at_1000_diff1
value: 28.1229347584593
- type: nauc_ndcg_at_1000_max
value: 25.58333272731477
- type: nauc_ndcg_at_100_diff1
value: 27.785141959567877
- type: nauc_ndcg_at_100_max
value: 25.262023849022043
- type: nauc_ndcg_at_10_diff1
value: 27.471970751404058
- type: nauc_ndcg_at_10_max
value: 26.100161257581366
- type: nauc_ndcg_at_1_diff1
value: 32.59370226571806
- type: nauc_ndcg_at_1_max
value: 27.210974235559654
- type: nauc_ndcg_at_20_diff1
value: 27.493348155478664
- type: nauc_ndcg_at_20_max
value: 25.429920376418064
- type: nauc_ndcg_at_3_diff1
value: 30.049291118944282
- type: nauc_ndcg_at_3_max
value: 25.87324242231816
- type: nauc_ndcg_at_5_diff1
value: 29.285225144397696
- type: nauc_ndcg_at_5_max
value: 26.538239428331035
- type: nauc_precision_at_1000_diff1
value: 0.30786182255546335
- type: nauc_precision_at_1000_max
value: -1.122583222669982
- type: nauc_precision_at_100_diff1
value: 16.643044236960975
- type: nauc_precision_at_100_max
value: 16.898061651611094
- type: nauc_precision_at_10_diff1
value: 22.225745733008576
- type: nauc_precision_at_10_max
value: 25.897094746875343
- type: nauc_precision_at_1_diff1
value: 32.59370226571806
- type: nauc_precision_at_1_max
value: 27.210974235559654
- type: nauc_precision_at_20_diff1
value: 21.25369337146511
- type: nauc_precision_at_20_max
value: 22.501776705458052
- type: nauc_precision_at_3_diff1
value: 29.752252136844444
- type: nauc_precision_at_3_max
value: 26.54042152706389
- type: nauc_precision_at_5_diff1
value: 28.014614694166003
- type: nauc_precision_at_5_max
value: 28.096365928752963
- type: nauc_recall_at_1000_diff1
value: 23.27380094527733
- type: nauc_recall_at_1000_max
value: 21.84162629652965
- type: nauc_recall_at_100_diff1
value: 21.646953220120835
- type: nauc_recall_at_100_max
value: 20.865427838218878
- type: nauc_recall_at_10_diff1
value: 21.50707716122318
- type: nauc_recall_at_10_max
value: 25.11348571349948
- type: nauc_recall_at_1_diff1
value: 32.10083227622593
- type: nauc_recall_at_1_max
value: 26.082934151928335
- type: nauc_recall_at_20_diff1
value: 21.149584140557366
- type: nauc_recall_at_20_max
value: 22.687778440936103
- type: nauc_recall_at_3_diff1
value: 28.105583308602416
- type: nauc_recall_at_3_max
value: 24.808247917706677
- type: nauc_recall_at_5_diff1
value: 26.268258977975016
- type: nauc_recall_at_5_max
value: 26.14783487728073
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 28.988999999999997
- type: ndcg_at_100
value: 33.835
- type: ndcg_at_1000
value: 37.07
- type: ndcg_at_20
value: 30.808000000000003
- type: ndcg_at_3
value: 24.581
- type: ndcg_at_5
value: 26.979999999999997
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 4.621
- type: precision_at_100
value: 0.754
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 2.7449999999999997
- type: precision_at_3
value: 10.783
- type: precision_at_5
value: 7.837
- type: recall_at_1
value: 18.028
- type: recall_at_10
value: 39.997
- type: recall_at_100
value: 62.361
- type: recall_at_1000
value: 87.083
- type: recall_at_20
value: 46.936
- type: recall_at_3
value: 28.323999999999998
- type: recall_at_5
value: 34.091
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 6.18
- type: map_at_10
value: 11.277
- type: map_at_100
value: 12.873000000000001
- type: map_at_1000
value: 13.098
- type: map_at_20
value: 12.033000000000001
- type: map_at_3
value: 9.213000000000001
- type: map_at_5
value: 10.179
- type: mrr_at_1
value: 13.615635179153093
- type: mrr_at_10
value: 22.3759371283801
- type: mrr_at_100
value: 23.74043839617308
- type: mrr_at_1000
value: 23.807597594686023
- type: mrr_at_20
value: 23.172313674880147
- type: mrr_at_3
value: 19.15309446254072
- type: mrr_at_5
value: 20.902280130293132
- type: nauc_map_at_1000_diff1
value: 28.07875585225803
- type: nauc_map_at_1000_max
value: 35.83708929348549
- type: nauc_map_at_100_diff1
value: 28.13253751629951
- type: nauc_map_at_100_max
value: 35.74641807227035
- type: nauc_map_at_10_diff1
value: 28.494200987801044
- type: nauc_map_at_10_max
value: 34.583975326049966
- type: nauc_map_at_1_diff1
value: 39.996859549061355
- type: nauc_map_at_1_max
value: 39.270510397353526
- type: nauc_map_at_20_diff1
value: 28.025812489675044
- type: nauc_map_at_20_max
value: 35.177169215298406
- type: nauc_map_at_3_diff1
value: 32.30909470003803
- type: nauc_map_at_3_max
value: 36.37733076481426
- type: nauc_map_at_5_diff1
value: 29.921198050945296
- type: nauc_map_at_5_max
value: 34.94586908456363
- type: nauc_mrr_at_1000_diff1
value: 26.08074340015364
- type: nauc_mrr_at_1000_max
value: 33.02687297171295
- type: nauc_mrr_at_100_diff1
value: 26.054902916636646
- type: nauc_mrr_at_100_max
value: 33.01617535618753
- type: nauc_mrr_at_10_diff1
value: 25.88148928931446
- type: nauc_mrr_at_10_max
value: 32.61799742547017
- type: nauc_mrr_at_1_diff1
value: 36.03138484582729
- type: nauc_mrr_at_1_max
value: 36.277950419702755
- type: nauc_mrr_at_20_diff1
value: 25.974259829372738
- type: nauc_mrr_at_20_max
value: 32.797465492032124
- type: nauc_mrr_at_3_diff1
value: 28.465203801115152
- type: nauc_mrr_at_3_max
value: 33.765630602498895
- type: nauc_mrr_at_5_diff1
value: 26.436793504159496
- type: nauc_mrr_at_5_max
value: 32.62012479096292
- type: nauc_ndcg_at_1000_diff1
value: 23.672305808597077
- type: nauc_ndcg_at_1000_max
value: 37.08958734194627
- type: nauc_ndcg_at_100_diff1
value: 23.944898359513253
- type: nauc_ndcg_at_100_max
value: 35.90189193251265
- type: nauc_ndcg_at_10_diff1
value: 23.75418336772388
- type: nauc_ndcg_at_10_max
value: 32.36026460519453
- type: nauc_ndcg_at_1_diff1
value: 36.03138484582729
- type: nauc_ndcg_at_1_max
value: 36.277950419702755
- type: nauc_ndcg_at_20_diff1
value: 23.079788090123536
- type: nauc_ndcg_at_20_max
value: 33.46154223206268
- type: nauc_ndcg_at_3_diff1
value: 28.612088697615302
- type: nauc_ndcg_at_3_max
value: 34.42302593383679
- type: nauc_ndcg_at_5_diff1
value: 25.671378300813853
- type: nauc_ndcg_at_5_max
value: 32.64433627531644
- type: nauc_precision_at_1000_diff1
value: -0.3052718758543331
- type: nauc_precision_at_1000_max
value: 18.16387047369377
- type: nauc_precision_at_100_diff1
value: 7.956051604214289
- type: nauc_precision_at_100_max
value: 25.73158237184035
- type: nauc_precision_at_10_diff1
value: 10.163750744754575
- type: nauc_precision_at_10_max
value: 26.00696664055645
- type: nauc_precision_at_1_diff1
value: 36.03138484582729
- type: nauc_precision_at_1_max
value: 36.277950419702755
- type: nauc_precision_at_20_diff1
value: 8.314279554483267
- type: nauc_precision_at_20_max
value: 27.115560839727205
- type: nauc_precision_at_3_diff1
value: 21.690833394205402
- type: nauc_precision_at_3_max
value: 32.156497547258645
- type: nauc_precision_at_5_diff1
value: 15.032164068271097
- type: nauc_precision_at_5_max
value: 28.07574462716985
- type: nauc_recall_at_1000_diff1
value: 10.738783260564638
- type: nauc_recall_at_1000_max
value: 35.56192865936298
- type: nauc_recall_at_100_diff1
value: 14.296129488159078
- type: nauc_recall_at_100_max
value: 30.18732747422109
- type: nauc_recall_at_10_diff1
value: 15.306405656539345
- type: nauc_recall_at_10_max
value: 25.621311074599586
- type: nauc_recall_at_1_diff1
value: 39.996859549061355
- type: nauc_recall_at_1_max
value: 39.270510397353526
- type: nauc_recall_at_20_diff1
value: 13.106484977306085
- type: nauc_recall_at_20_max
value: 26.123290353715582
- type: nauc_recall_at_3_diff1
value: 25.514669409942993
- type: nauc_recall_at_3_max
value: 31.83833126301056
- type: nauc_recall_at_5_diff1
value: 18.916230651494935
- type: nauc_recall_at_5_max
value: 27.401847297469374
- type: ndcg_at_1
value: 13.616
- type: ndcg_at_10
value: 17.005
- type: ndcg_at_100
value: 24.686
- type: ndcg_at_1000
value: 29.037000000000003
- type: ndcg_at_20
value: 19.575
- type: ndcg_at_3
value: 12.867
- type: ndcg_at_5
value: 14.313
- type: precision_at_1
value: 13.616
- type: precision_at_10
value: 5.603000000000001
- type: precision_at_100
value: 1.384
- type: precision_at_1000
value: 0.218
- type: precision_at_20
value: 3.8629999999999995
- type: precision_at_3
value: 9.62
- type: precision_at_5
value: 7.686999999999999
- type: recall_at_1
value: 6.18
- type: recall_at_10
value: 22.253999999999998
- type: recall_at_100
value: 49.857
- type: recall_at_1000
value: 74.509
- type: recall_at_20
value: 29.79
- type: recall_at_3
value: 12.471
- type: recall_at_5
value: 16.154
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 8.291
- type: map_at_10
value: 16.830000000000002
- type: map_at_100
value: 23.034
- type: map_at_1000
value: 24.512999999999998
- type: map_at_20
value: 19.091
- type: map_at_3
value: 12.845999999999998
- type: map_at_5
value: 14.402000000000001
- type: mrr_at_1
value: 58.75
- type: mrr_at_10
value: 68.33035714285714
- type: mrr_at_100
value: 68.71251724781709
- type: mrr_at_1000
value: 68.72409210627302
- type: mrr_at_20
value: 68.47271810039805
- type: mrr_at_3
value: 66.25
- type: mrr_at_5
value: 67.575
- type: nauc_map_at_1000_diff1
value: 27.342350846844752
- type: nauc_map_at_1000_max
value: 14.188457616813544
- type: nauc_map_at_100_diff1
value: 27.710778521551514
- type: nauc_map_at_100_max
value: 11.818644063311075
- type: nauc_map_at_10_diff1
value: 28.06853470498657
- type: nauc_map_at_10_max
value: -0.12310961533344932
- type: nauc_map_at_1_diff1
value: 36.28948693750488
- type: nauc_map_at_1_max
value: -8.633353634959779
- type: nauc_map_at_20_diff1
value: 27.934994633916343
- type: nauc_map_at_20_max
value: 4.66682483439594
- type: nauc_map_at_3_diff1
value: 28.558592385022912
- type: nauc_map_at_3_max
value: -6.5695573701219905
- type: nauc_map_at_5_diff1
value: 28.15758211700583
- type: nauc_map_at_5_max
value: -4.438837627387033
- type: nauc_mrr_at_1000_diff1
value: 32.88741935239777
- type: nauc_mrr_at_1000_max
value: 36.85520170797062
- type: nauc_mrr_at_100_diff1
value: 32.89939422961155
- type: nauc_mrr_at_100_max
value: 36.86316603211002
- type: nauc_mrr_at_10_diff1
value: 32.75700243218855
- type: nauc_mrr_at_10_max
value: 37.04714292361367
- type: nauc_mrr_at_1_diff1
value: 34.13253827440505
- type: nauc_mrr_at_1_max
value: 33.77037186076712
- type: nauc_mrr_at_20_diff1
value: 32.80415591728181
- type: nauc_mrr_at_20_max
value: 36.99253314411835
- type: nauc_mrr_at_3_diff1
value: 32.586811805982016
- type: nauc_mrr_at_3_max
value: 36.12324135058626
- type: nauc_mrr_at_5_diff1
value: 33.03671664465732
- type: nauc_mrr_at_5_max
value: 36.97824557212217
- type: nauc_ndcg_at_1000_diff1
value: 28.337772149714162
- type: nauc_ndcg_at_1000_max
value: 25.377883958551458
- type: nauc_ndcg_at_100_diff1
value: 28.41718749082906
- type: nauc_ndcg_at_100_max
value: 18.123074545882446
- type: nauc_ndcg_at_10_diff1
value: 27.74101109260006
- type: nauc_ndcg_at_10_max
value: 17.69386105978732
- type: nauc_ndcg_at_1_diff1
value: 31.913034148097413
- type: nauc_ndcg_at_1_max
value: 26.268179898794834
- type: nauc_ndcg_at_20_diff1
value: 27.916849997321297
- type: nauc_ndcg_at_20_max
value: 15.44589123370981
- type: nauc_ndcg_at_3_diff1
value: 26.337487002010345
- type: nauc_ndcg_at_3_max
value: 22.565903592425173
- type: nauc_ndcg_at_5_diff1
value: 26.902286744591574
- type: nauc_ndcg_at_5_max
value: 20.51307548413365
- type: nauc_precision_at_1000_diff1
value: -10.757573462902606
- type: nauc_precision_at_1000_max
value: 33.20717701523462
- type: nauc_precision_at_100_diff1
value: 4.142228479859455
- type: nauc_precision_at_100_max
value: 35.16404294730987
- type: nauc_precision_at_10_diff1
value: 12.469406451834095
- type: nauc_precision_at_10_max
value: 37.9712993631463
- type: nauc_precision_at_1_diff1
value: 34.13253827440505
- type: nauc_precision_at_1_max
value: 33.77037186076712
- type: nauc_precision_at_20_diff1
value: 9.319269643229944
- type: nauc_precision_at_20_max
value: 37.327643501927774
- type: nauc_precision_at_3_diff1
value: 16.255946329046314
- type: nauc_precision_at_3_max
value: 32.91012044795064
- type: nauc_precision_at_5_diff1
value: 14.45882910211086
- type: nauc_precision_at_5_max
value: 37.335976736380175
- type: nauc_recall_at_1000_diff1
value: 21.567656720257148
- type: nauc_recall_at_1000_max
value: 21.46402442471943
- type: nauc_recall_at_100_diff1
value: 22.39134946314963
- type: nauc_recall_at_100_max
value: 11.668568431971867
- type: nauc_recall_at_10_diff1
value: 23.937310413594133
- type: nauc_recall_at_10_max
value: -1.3011457352984097
- type: nauc_recall_at_1_diff1
value: 36.28948693750488
- type: nauc_recall_at_1_max
value: -8.633353634959779
- type: nauc_recall_at_20_diff1
value: 23.638781595544128
- type: nauc_recall_at_20_max
value: 5.460347213047418
- type: nauc_recall_at_3_diff1
value: 26.087367269108725
- type: nauc_recall_at_3_max
value: -8.392789032179113
- type: nauc_recall_at_5_diff1
value: 24.741724571041253
- type: nauc_recall_at_5_max
value: -6.393820851085444
- type: ndcg_at_1
value: 47.25
- type: ndcg_at_10
value: 35.081
- type: ndcg_at_100
value: 39.969
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_20
value: 34.547
- type: ndcg_at_3
value: 39.304
- type: ndcg_at_5
value: 36.79
- type: precision_at_1
value: 58.75
- type: precision_at_10
value: 26.85
- type: precision_at_100
value: 8.703
- type: precision_at_1000
value: 1.8980000000000001
- type: precision_at_20
value: 20.037
- type: precision_at_3
value: 42.083
- type: precision_at_5
value: 35.099999999999994
- type: recall_at_1
value: 8.291
- type: recall_at_10
value: 22.274
- type: recall_at_100
value: 46.904
- type: recall_at_1000
value: 71.71
- type: recall_at_20
value: 27.898
- type: recall_at_3
value: 14.213999999999999
- type: recall_at_5
value: 17.035
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.11
- type: f1
value: 41.2294008856841
- type: f1_weighted
value: 49.015064693968355
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 40.014
- type: map_at_10
value: 54.081999999999994
- type: map_at_100
value: 54.698
- type: map_at_1000
value: 54.72
- type: map_at_20
value: 54.48
- type: map_at_3
value: 50.809000000000005
- type: map_at_5
value: 52.79899999999999
- type: mrr_at_1
value: 43.009300930093005
- type: mrr_at_10
value: 57.503827763729085
- type: mrr_at_100
value: 58.049446560229754
- type: mrr_at_1000
value: 58.0600709633308
- type: mrr_at_20
value: 57.875221009859665
- type: mrr_at_3
value: 54.25292529252964
- type: mrr_at_5
value: 56.24212421242145
- type: nauc_map_at_1000_diff1
value: 37.53159168913937
- type: nauc_map_at_1000_max
value: 6.8836860808007465
- type: nauc_map_at_100_diff1
value: 37.528011745089835
- type: nauc_map_at_100_max
value: 6.882491514393514
- type: nauc_map_at_10_diff1
value: 37.3772602442582
- type: nauc_map_at_10_max
value: 6.941476394099929
- type: nauc_map_at_1_diff1
value: 41.86294360198839
- type: nauc_map_at_1_max
value: 6.023892954634666
- type: nauc_map_at_20_diff1
value: 37.427127392559335
- type: nauc_map_at_20_max
value: 6.882487617592033
- type: nauc_map_at_3_diff1
value: 37.20054603111901
- type: nauc_map_at_3_max
value: 6.161906578752757
- type: nauc_map_at_5_diff1
value: 37.07890205012265
- type: nauc_map_at_5_max
value: 6.693250003060114
- type: nauc_mrr_at_1000_diff1
value: 40.01409327276433
- type: nauc_mrr_at_1000_max
value: 6.986140706019336
- type: nauc_mrr_at_100_diff1
value: 40.01603614242155
- type: nauc_mrr_at_100_max
value: 6.994969349070441
- type: nauc_mrr_at_10_diff1
value: 39.88645059936797
- type: nauc_mrr_at_10_max
value: 7.128874648135309
- type: nauc_mrr_at_1_diff1
value: 44.010610515801766
- type: nauc_mrr_at_1_max
value: 5.9421616518924
- type: nauc_mrr_at_20_diff1
value: 39.91760958564354
- type: nauc_mrr_at_20_max
value: 7.001496247992628
- type: nauc_mrr_at_3_diff1
value: 39.55814635404465
- type: nauc_mrr_at_3_max
value: 6.348916278839159
- type: nauc_mrr_at_5_diff1
value: 39.53111951703581
- type: nauc_mrr_at_5_max
value: 6.90608845386575
- type: nauc_ndcg_at_1000_diff1
value: 37.410083910100454
- type: nauc_ndcg_at_1000_max
value: 7.702794760075665
- type: nauc_ndcg_at_100_diff1
value: 37.24367644094166
- type: nauc_ndcg_at_100_max
value: 7.748369388619449
- type: nauc_ndcg_at_10_diff1
value: 36.42739864037634
- type: nauc_ndcg_at_10_max
value: 8.023602519440981
- type: nauc_ndcg_at_1_diff1
value: 44.010610515801766
- type: nauc_ndcg_at_1_max
value: 5.9421616518924
- type: nauc_ndcg_at_20_diff1
value: 36.532843433534836
- type: nauc_ndcg_at_20_max
value: 7.736575556014484
- type: nauc_ndcg_at_3_diff1
value: 36.195062467683485
- type: nauc_ndcg_at_3_max
value: 6.351990387166732
- type: nauc_ndcg_at_5_diff1
value: 35.823128734687835
- type: nauc_ndcg_at_5_max
value: 7.397593069596111
- type: nauc_precision_at_1000_diff1
value: -3.5385176540536514
- type: nauc_precision_at_1000_max
value: 6.514978387518318
- type: nauc_precision_at_100_diff1
value: 3.0567737052705435
- type: nauc_precision_at_100_max
value: 9.197755692738543
- type: nauc_precision_at_10_diff1
value: 22.124829864658473
- type: nauc_precision_at_10_max
value: 13.41228356156045
- type: nauc_precision_at_1_diff1
value: 44.010610515801766
- type: nauc_precision_at_1_max
value: 5.9421616518924
- type: nauc_precision_at_20_diff1
value: 14.960320394382231
- type: nauc_precision_at_20_max
value: 12.26648981111606
- type: nauc_precision_at_3_diff1
value: 31.130457883879092
- type: nauc_precision_at_3_max
value: 6.82126659383764
- type: nauc_precision_at_5_diff1
value: 26.884225039660837
- type: nauc_precision_at_5_max
value: 10.124900855966871
- type: nauc_recall_at_1000_diff1
value: 17.368406898729038
- type: nauc_recall_at_1000_max
value: 20.346937130202836
- type: nauc_recall_at_100_diff1
value: 20.167019111696035
- type: nauc_recall_at_100_max
value: 15.751973809238756
- type: nauc_recall_at_10_diff1
value: 24.579485696194457
- type: nauc_recall_at_10_max
value: 12.597490151740628
- type: nauc_recall_at_1_diff1
value: 41.86294360198839
- type: nauc_recall_at_1_max
value: 6.023892954634666
- type: nauc_recall_at_20_diff1
value: 21.09351386111729
- type: nauc_recall_at_20_max
value: 11.699302732967896
- type: nauc_recall_at_3_diff1
value: 29.17762647595018
- type: nauc_recall_at_3_max
value: 6.461951154139278
- type: nauc_recall_at_5_diff1
value: 26.266588024737242
- type: nauc_recall_at_5_max
value: 9.344922507705471
- type: ndcg_at_1
value: 43.009
- type: ndcg_at_10
value: 61.541999999999994
- type: ndcg_at_100
value: 64.292
- type: ndcg_at_1000
value: 64.781
- type: ndcg_at_20
value: 62.925
- type: ndcg_at_3
value: 55.095000000000006
- type: ndcg_at_5
value: 58.557
- type: precision_at_1
value: 43.009
- type: precision_at_10
value: 8.863
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 4.742
- type: precision_at_3
value: 23.077
- type: precision_at_5
value: 15.701
- type: recall_at_1
value: 40.014
- type: recall_at_10
value: 81.319
- type: recall_at_100
value: 93.362
- type: recall_at_1000
value: 96.72500000000001
- type: recall_at_20
value: 86.54899999999999
- type: recall_at_3
value: 63.975
- type: recall_at_5
value: 72.361
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 14.754999999999999
- type: map_at_10
value: 23.415
- type: map_at_100
value: 25.389
- type: map_at_1000
value: 25.583
- type: map_at_20
value: 24.471999999999998
- type: map_at_3
value: 20.706
- type: map_at_5
value: 22.137
- type: mrr_at_1
value: 29.938271604938272
- type: mrr_at_10
value: 37.82272682735643
- type: mrr_at_100
value: 38.93217561179166
- type: mrr_at_1000
value: 38.993419651389594
- type: mrr_at_20
value: 38.51688181126462
- type: mrr_at_3
value: 35.49382716049384
- type: mrr_at_5
value: 36.69753086419753
- type: nauc_map_at_1000_diff1
value: 39.87836835005687
- type: nauc_map_at_1000_max
value: 27.964955726459046
- type: nauc_map_at_100_diff1
value: 39.85804813979803
- type: nauc_map_at_100_max
value: 27.901085428248518
- type: nauc_map_at_10_diff1
value: 40.40910564822785
- type: nauc_map_at_10_max
value: 27.072531521163906
- type: nauc_map_at_1_diff1
value: 45.20997932526616
- type: nauc_map_at_1_max
value: 24.648676414998096
- type: nauc_map_at_20_diff1
value: 40.028106498724725
- type: nauc_map_at_20_max
value: 27.60132226396262
- type: nauc_map_at_3_diff1
value: 41.05603264971634
- type: nauc_map_at_3_max
value: 26.25426542015784
- type: nauc_map_at_5_diff1
value: 41.00888735158693
- type: nauc_map_at_5_max
value: 27.191449534557897
- type: nauc_mrr_at_1000_diff1
value: 41.89281932766717
- type: nauc_mrr_at_1000_max
value: 35.99284962374552
- type: nauc_mrr_at_100_diff1
value: 41.8809308098677
- type: nauc_mrr_at_100_max
value: 35.99275706919067
- type: nauc_mrr_at_10_diff1
value: 42.034104043793334
- type: nauc_mrr_at_10_max
value: 35.76467432283676
- type: nauc_mrr_at_1_diff1
value: 46.49733800476976
- type: nauc_mrr_at_1_max
value: 37.4658949472576
- type: nauc_mrr_at_20_diff1
value: 41.748351174212104
- type: nauc_mrr_at_20_max
value: 35.9625951228368
- type: nauc_mrr_at_3_diff1
value: 43.03598990961282
- type: nauc_mrr_at_3_max
value: 36.15714656335092
- type: nauc_mrr_at_5_diff1
value: 42.82409168973717
- type: nauc_mrr_at_5_max
value: 36.196423200277614
- type: nauc_ndcg_at_1000_diff1
value: 37.73166244969865
- type: nauc_ndcg_at_1000_max
value: 31.381646846120077
- type: nauc_ndcg_at_100_diff1
value: 37.23016841045615
- type: nauc_ndcg_at_100_max
value: 30.46125187550548
- type: nauc_ndcg_at_10_diff1
value: 38.973941698385175
- type: nauc_ndcg_at_10_max
value: 28.50687601887441
- type: nauc_ndcg_at_1_diff1
value: 46.49733800476976
- type: nauc_ndcg_at_1_max
value: 37.4658949472576
- type: nauc_ndcg_at_20_diff1
value: 37.789595359314184
- type: nauc_ndcg_at_20_max
value: 29.22246454793801
- type: nauc_ndcg_at_3_diff1
value: 41.76335349611918
- type: nauc_ndcg_at_3_max
value: 32.83475312409831
- type: nauc_ndcg_at_5_diff1
value: 41.107481866851366
- type: nauc_ndcg_at_5_max
value: 30.504284365072547
- type: nauc_precision_at_1000_diff1
value: 2.5166465109243528
- type: nauc_precision_at_1000_max
value: 24.899624433788485
- type: nauc_precision_at_100_diff1
value: 10.522599150582685
- type: nauc_precision_at_100_max
value: 29.50873923975521
- type: nauc_precision_at_10_diff1
value: 27.120890973936778
- type: nauc_precision_at_10_max
value: 32.955042844361046
- type: nauc_precision_at_1_diff1
value: 46.49733800476976
- type: nauc_precision_at_1_max
value: 37.4658949472576
- type: nauc_precision_at_20_diff1
value: 20.299314592682165
- type: nauc_precision_at_20_max
value: 33.25503440470617
- type: nauc_precision_at_3_diff1
value: 35.273519611653626
- type: nauc_precision_at_3_max
value: 36.33062698057323
- type: nauc_precision_at_5_diff1
value: 33.590199611021006
- type: nauc_precision_at_5_max
value: 36.68276540744115
- type: nauc_recall_at_1000_diff1
value: 8.618184855067168
- type: nauc_recall_at_1000_max
value: 21.822927142441923
- type: nauc_recall_at_100_diff1
value: 17.055234719322666
- type: nauc_recall_at_100_max
value: 19.555357018393373
- type: nauc_recall_at_10_diff1
value: 28.06102408728667
- type: nauc_recall_at_10_max
value: 19.309443630849117
- type: nauc_recall_at_1_diff1
value: 45.20997932526616
- type: nauc_recall_at_1_max
value: 24.648676414998096
- type: nauc_recall_at_20_diff1
value: 23.428917950250966
- type: nauc_recall_at_20_max
value: 19.351922857081917
- type: nauc_recall_at_3_diff1
value: 35.290875728597044
- type: nauc_recall_at_3_max
value: 22.777706184795857
- type: nauc_recall_at_5_diff1
value: 33.580206464404554
- type: nauc_recall_at_5_max
value: 22.290191050692897
- type: ndcg_at_1
value: 29.938
- type: ndcg_at_10
value: 29.974
- type: ndcg_at_100
value: 37.791999999999994
- type: ndcg_at_1000
value: 41.525
- type: ndcg_at_20
value: 33.032000000000004
- type: ndcg_at_3
value: 27.418
- type: ndcg_at_5
value: 27.925
- type: precision_at_1
value: 29.938
- type: precision_at_10
value: 8.41
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_20
value: 5.409
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.456999999999999
- type: recall_at_1
value: 14.754999999999999
- type: recall_at_10
value: 34.863
- type: recall_at_100
value: 64.471
- type: recall_at_1000
value: 87.386
- type: recall_at_20
value: 44.574999999999996
- type: recall_at_3
value: 24.359
- type: recall_at_5
value: 28.442
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 35.476
- type: map_at_10
value: 55.208
- type: map_at_100
value: 56.152
- type: map_at_1000
value: 56.22200000000001
- type: map_at_20
value: 55.754000000000005
- type: map_at_3
value: 51.742999999999995
- type: map_at_5
value: 53.900000000000006
- type: mrr_at_1
value: 70.95205941931127
- type: mrr_at_10
value: 78.42775687812804
- type: mrr_at_100
value: 78.68481706123845
- type: mrr_at_1000
value: 78.69468662644054
- type: mrr_at_20
value: 78.59334220969656
- type: mrr_at_3
value: 77.21584514967346
- type: mrr_at_5
value: 78.01395453522359
- type: nauc_map_at_1000_diff1
value: 25.86438441197036
- type: nauc_map_at_1000_max
value: 24.208328720722367
- type: nauc_map_at_100_diff1
value: 25.836404334583698
- type: nauc_map_at_100_max
value: 24.195017226851576
- type: nauc_map_at_10_diff1
value: 25.722782423795227
- type: nauc_map_at_10_max
value: 23.987656322283062
- type: nauc_map_at_1_diff1
value: 65.41482575591807
- type: nauc_map_at_1_max
value: 36.750296668618084
- type: nauc_map_at_20_diff1
value: 25.748480864914974
- type: nauc_map_at_20_max
value: 24.135897754765367
- type: nauc_map_at_3_diff1
value: 26.38356826015294
- type: nauc_map_at_3_max
value: 23.621729891258973
- type: nauc_map_at_5_diff1
value: 26.0945105687306
- type: nauc_map_at_5_max
value: 23.988655536143728
- type: nauc_mrr_at_1000_diff1
value: 64.17645353489316
- type: nauc_mrr_at_1000_max
value: 39.676745401203924
- type: nauc_mrr_at_100_diff1
value: 64.17493074038984
- type: nauc_mrr_at_100_max
value: 39.677377348237144
- type: nauc_mrr_at_10_diff1
value: 64.11955469392967
- type: nauc_mrr_at_10_max
value: 39.82247333641905
- type: nauc_mrr_at_1_diff1
value: 65.41482575591807
- type: nauc_mrr_at_1_max
value: 36.750296668618084
- type: nauc_mrr_at_20_diff1
value: 64.15026091520691
- type: nauc_mrr_at_20_max
value: 39.73722517116609
- type: nauc_mrr_at_3_diff1
value: 63.9908644633026
- type: nauc_mrr_at_3_max
value: 39.70016359929288
- type: nauc_mrr_at_5_diff1
value: 64.13062355820735
- type: nauc_mrr_at_5_max
value: 39.90280825011535
- type: nauc_ndcg_at_1000_diff1
value: 31.23415779213086
- type: nauc_ndcg_at_1000_max
value: 27.776674450652884
- type: nauc_ndcg_at_100_diff1
value: 30.35823573217142
- type: nauc_ndcg_at_100_max
value: 27.34738082248897
- type: nauc_ndcg_at_10_diff1
value: 30.106808177162254
- type: nauc_ndcg_at_10_max
value: 26.953412093415036
- type: nauc_ndcg_at_1_diff1
value: 65.41482575591807
- type: nauc_ndcg_at_1_max
value: 36.750296668618084
- type: nauc_ndcg_at_20_diff1
value: 29.95512057702211
- type: nauc_ndcg_at_20_max
value: 27.167767360883598
- type: nauc_ndcg_at_3_diff1
value: 31.769187019307232
- type: nauc_ndcg_at_3_max
value: 26.73293484943561
- type: nauc_ndcg_at_5_diff1
value: 30.990681805979882
- type: nauc_ndcg_at_5_max
value: 27.06261773043383
- type: nauc_precision_at_1000_diff1
value: 2.7411864902028804
- type: nauc_precision_at_1000_max
value: 21.87546627838
- type: nauc_precision_at_100_diff1
value: 6.07948946116136
- type: nauc_precision_at_100_max
value: 19.74129594300319
- type: nauc_precision_at_10_diff1
value: 12.773005438330138
- type: nauc_precision_at_10_max
value: 21.240483881434862
- type: nauc_precision_at_1_diff1
value: 65.41482575591807
- type: nauc_precision_at_1_max
value: 36.750296668618084
- type: nauc_precision_at_20_diff1
value: 10.059987751705838
- type: nauc_precision_at_20_max
value: 20.886351066202028
- type: nauc_precision_at_3_diff1
value: 19.42571093704271
- type: nauc_precision_at_3_max
value: 22.725022430676766
- type: nauc_precision_at_5_diff1
value: 16.630466686718133
- type: nauc_precision_at_5_max
value: 22.43986381185507
- type: nauc_recall_at_1000_diff1
value: 2.741186490202987
- type: nauc_recall_at_1000_max
value: 21.875466278380117
- type: nauc_recall_at_100_diff1
value: 6.079489461161415
- type: nauc_recall_at_100_max
value: 19.74129594300323
- type: nauc_recall_at_10_diff1
value: 12.77300543833007
- type: nauc_recall_at_10_max
value: 21.24048388143479
- type: nauc_recall_at_1_diff1
value: 65.41482575591807
- type: nauc_recall_at_1_max
value: 36.750296668618084
- type: nauc_recall_at_20_diff1
value: 10.059987751705897
- type: nauc_recall_at_20_max
value: 20.88635106620212
- type: nauc_recall_at_3_diff1
value: 19.42571093704268
- type: nauc_recall_at_3_max
value: 22.725022430676713
- type: nauc_recall_at_5_diff1
value: 16.630466686718112
- type: nauc_recall_at_5_max
value: 22.439863811855034
- type: ndcg_at_1
value: 70.952
- type: ndcg_at_10
value: 64.154
- type: ndcg_at_100
value: 67.54299999999999
- type: ndcg_at_1000
value: 68.931
- type: ndcg_at_20
value: 65.556
- type: ndcg_at_3
value: 59.10300000000001
- type: ndcg_at_5
value: 61.92100000000001
- type: precision_at_1
value: 70.952
- type: precision_at_10
value: 13.611999999999998
- type: precision_at_100
value: 1.627
- type: precision_at_1000
value: 0.181
- type: precision_at_20
value: 7.258000000000001
- type: precision_at_3
value: 37.929
- type: precision_at_5
value: 24.986
- type: recall_at_1
value: 35.476
- type: recall_at_10
value: 68.062
- type: recall_at_100
value: 81.34400000000001
- type: recall_at_1000
value: 90.567
- type: recall_at_20
value: 72.57900000000001
- type: recall_at_3
value: 56.894
- type: recall_at_5
value: 62.465
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 87.392
- type: ap
value: 82.30893640195251
- type: ap_weighted
value: 82.30893640195251
- type: f1
value: 87.35348856878436
- type: f1_weighted
value: 87.35348856878437
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 37.701
- type: map_at_1
value: 19.483
- type: map_at_10
value: 31.148
- type: map_at_100
value: 32.402
- type: map_at_1000
value: 32.462
- type: map_at_20
value: 31.921
- type: map_at_3
value: 27.509
- type: map_at_5
value: 29.521
- type: mrr_at_1
value: 19.95702005730659
- type: mrr_at_10
value: 31.629388957110926
- type: mrr_at_100
value: 32.835516469756676
- type: mrr_at_1000
value: 32.889168712361034
- type: mrr_at_20
value: 32.38456688305533
- type: mrr_at_3
value: 28.04680038204374
- type: mrr_at_5
value: 30.005969436485113
- type: nauc_map_at_1000_diff1
value: 32.10821100880179
- type: nauc_map_at_1000_max
value: 0.7085504734279151
- type: nauc_map_at_1000_std
value: -9.657081445961802
- type: nauc_map_at_100_diff1
value: 32.1022582818382
- type: nauc_map_at_100_max
value: 0.7094263225288184
- type: nauc_map_at_100_std
value: -9.615141441552195
- type: nauc_map_at_10_diff1
value: 32.12324150765439
- type: nauc_map_at_10_max
value: 0.6031410503880614
- type: nauc_map_at_10_std
value: -10.446787175853538
- type: nauc_map_at_1_diff1
value: 33.742442076477175
- type: nauc_map_at_1_max
value: -0.4682625514356794
- type: nauc_map_at_1_std
value: -10.128931659708204
- type: nauc_map_at_20_diff1
value: 32.0860174618957
- type: nauc_map_at_20_max
value: 0.6702984410061132
- type: nauc_map_at_20_std
value: -9.906667239123404
- type: nauc_map_at_3_diff1
value: 32.29226575652717
- type: nauc_map_at_3_max
value: 0.0973438744261487
- type: nauc_map_at_3_std
value: -11.124768365213317
- type: nauc_map_at_5_diff1
value: 32.08229672223041
- type: nauc_map_at_5_max
value: 0.2176450911149463
- type: nauc_map_at_5_std
value: -11.055413229428249
- type: nauc_mrr_at_1000_diff1
value: 31.895348209132734
- type: nauc_mrr_at_1000_max
value: 0.7016011591304079
- type: nauc_mrr_at_1000_std
value: -9.67662400401361
- type: nauc_mrr_at_100_diff1
value: 31.8866191175737
- type: nauc_mrr_at_100_max
value: 0.7081721058731258
- type: nauc_mrr_at_100_std
value: -9.633818059393716
- type: nauc_mrr_at_10_diff1
value: 31.89471221387619
- type: nauc_mrr_at_10_max
value: 0.6339660994085835
- type: nauc_mrr_at_10_std
value: -10.40302377816781
- type: nauc_mrr_at_1_diff1
value: 33.51497482281671
- type: nauc_mrr_at_1_max
value: -0.38461440436551425
- type: nauc_mrr_at_1_std
value: -10.229237135864578
- type: nauc_mrr_at_20_diff1
value: 31.858413430545546
- type: nauc_mrr_at_20_max
value: 0.6976488035968785
- type: nauc_mrr_at_20_std
value: -9.889412090836752
- type: nauc_mrr_at_3_diff1
value: 32.07820212397262
- type: nauc_mrr_at_3_max
value: 0.0237815936067523
- type: nauc_mrr_at_3_std
value: -11.225901208970873
- type: nauc_mrr_at_5_diff1
value: 31.880984335275738
- type: nauc_mrr_at_5_max
value: 0.199684088046682
- type: nauc_mrr_at_5_std
value: -11.076286377930138
- type: nauc_ndcg_at_1000_diff1
value: 31.54073203534171
- type: nauc_ndcg_at_1000_max
value: 1.865885878739077
- type: nauc_ndcg_at_1000_std
value: -7.111772836122038
- type: nauc_ndcg_at_100_diff1
value: 31.37901877227891
- type: nauc_ndcg_at_100_max
value: 1.9480183608898676
- type: nauc_ndcg_at_100_std
value: -5.691552787254062
- type: nauc_ndcg_at_10_diff1
value: 31.432464505042674
- type: nauc_ndcg_at_10_max
value: 1.37534724565308
- type: nauc_ndcg_at_10_std
value: -9.688757807073005
- type: nauc_ndcg_at_1_diff1
value: 33.44622804608476
- type: nauc_ndcg_at_1_max
value: -0.4275117298270247
- type: nauc_ndcg_at_1_std
value: -10.195443857766566
- type: nauc_ndcg_at_20_diff1
value: 31.27757208526727
- type: nauc_ndcg_at_20_max
value: 1.6486846485417688
- type: nauc_ndcg_at_20_std
value: -7.731364695550273
- type: nauc_ndcg_at_3_diff1
value: 31.833281451625755
- type: nauc_ndcg_at_3_max
value: 0.248993633296169
- type: nauc_ndcg_at_3_std
value: -11.375142227624853
- type: nauc_ndcg_at_5_diff1
value: 31.450773538855685
- type: nauc_ndcg_at_5_max
value: 0.4439465859365249
- type: nauc_ndcg_at_5_std
value: -11.21243082848607
- type: nauc_precision_at_1000_diff1
value: -3.189589259900143
- type: nauc_precision_at_1000_max
value: 16.455244940685287
- type: nauc_precision_at_1000_std
value: 14.259564795238846
- type: nauc_precision_at_100_diff1
value: 14.87654355875184
- type: nauc_precision_at_100_max
value: 11.935690745346687
- type: nauc_precision_at_100_std
value: 23.08831102598726
- type: nauc_precision_at_10_diff1
value: 27.37421025197246
- type: nauc_precision_at_10_max
value: 4.166107809672891
- type: nauc_precision_at_10_std
value: -6.881018792063406
- type: nauc_precision_at_1_diff1
value: 33.44622804608476
- type: nauc_precision_at_1_max
value: -0.4275117298270247
- type: nauc_precision_at_1_std
value: -10.195443857766566
- type: nauc_precision_at_20_diff1
value: 24.609029763068264
- type: nauc_precision_at_20_max
value: 6.284294179586118
- type: nauc_precision_at_20_std
value: 1.9907354399675787
- type: nauc_precision_at_3_diff1
value: 30.032708101293274
- type: nauc_precision_at_3_max
value: 0.3723674743540282
- type: nauc_precision_at_3_std
value: -12.3372142678907
- type: nauc_precision_at_5_diff1
value: 28.660692204010886
- type: nauc_precision_at_5_max
value: 1.2268562134706205
- type: nauc_precision_at_5_std
value: -11.633329112055833
- type: nauc_recall_at_1000_diff1
value: 25.817560788852198
- type: nauc_recall_at_1000_max
value: 36.86227659755896
- type: nauc_recall_at_1000_std
value: 63.94303180569221
- type: nauc_recall_at_100_diff1
value: 27.1728992415636
- type: nauc_recall_at_100_max
value: 11.548635552653852
- type: nauc_recall_at_100_std
value: 31.974599323501035
- type: nauc_recall_at_10_diff1
value: 29.294882367129404
- type: nauc_recall_at_10_max
value: 3.763234551980347
- type: nauc_recall_at_10_std
value: -6.861891907951573
- type: nauc_recall_at_1_diff1
value: 33.742442076477175
- type: nauc_recall_at_1_max
value: -0.4682625514356794
- type: nauc_recall_at_1_std
value: -10.128931659708204
- type: nauc_recall_at_20_diff1
value: 28.258984645981585
- type: nauc_recall_at_20_max
value: 5.291852480348364
- type: nauc_recall_at_20_std
value: 2.243022569638662
- type: nauc_recall_at_3_diff1
value: 30.6436214049594
- type: nauc_recall_at_3_max
value: 0.7472464014018717
- type: nauc_recall_at_3_std
value: -11.831114059680123
- type: nauc_recall_at_5_diff1
value: 29.74258077127522
- type: nauc_recall_at_5_max
value: 1.0386095329679945
- type: nauc_recall_at_5_std
value: -11.487480890097885
- type: ndcg_at_1
value: 19.971
- type: ndcg_at_10
value: 37.701
- type: ndcg_at_100
value: 43.71
- type: ndcg_at_1000
value: 45.191
- type: ndcg_at_20
value: 40.421
- type: ndcg_at_3
value: 30.25
- type: ndcg_at_5
value: 33.816
- type: precision_at_1
value: 19.971
- type: precision_at_10
value: 6.037
- type: precision_at_100
value: 0.904
- type: precision_at_1000
value: 0.103
- type: precision_at_20
value: 3.58
- type: precision_at_3
value: 12.97
- type: precision_at_5
value: 9.59
- type: recall_at_1
value: 19.483
- type: recall_at_10
value: 57.972
- type: recall_at_100
value: 85.639
- type: recall_at_1000
value: 96.954
- type: recall_at_20
value: 68.456
- type: recall_at_3
value: 37.628
- type: recall_at_5
value: 46.169
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.86365709074327
- type: f1
value: 92.6364668121745
- type: f1_weighted
value: 92.86565911176659
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 64.8518011855905
- type: f1
value: 46.19074540696208
- type: f1_weighted
value: 67.21842739260879
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 68.86348352387357
- type: f1
value: 66.61591434283466
- type: f1_weighted
value: 67.89865692906163
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 75.24882313382649
- type: f1
value: 74.48847159335763
- type: f1_weighted
value: 74.9792940543356
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.112200560363956
- type: v_measures
value:
- 0.2907912540071734
- 0.30018368642906285
- 0.2969957025066252
- 0.3046412932920165
- 0.3023643866062867
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.10468325586837
- type: v_measures
value:
- 0.2762290794660957
- 0.27088303670750474
- 0.27869334502955523
- 0.2877927347410669
- 0.283558635107815
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: map
value: 31.65436388779704
- type: mrr
value: 32.70570836057599
- type: nAUC_map_diff1
value: 11.29046170729857
- type: nAUC_map_max
value: -21.303851901712402
- type: nAUC_mrr_diff1
value: 10.839304152084038
- type: nAUC_mrr_max
value: -15.980370460373264
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 5.349
- type: map_at_10
value: 11.943
- type: map_at_100
value: 14.838999999999999
- type: map_at_1000
value: 16.228
- type: map_at_20
value: 13.255
- type: map_at_3
value: 8.712
- type: map_at_5
value: 10.078
- type: mrr_at_1
value: 41.48606811145511
- type: mrr_at_10
value: 50.93174111749962
- type: mrr_at_100
value: 51.51424977382773
- type: mrr_at_1000
value: 51.54588577020572
- type: mrr_at_20
value: 51.23754793809378
- type: mrr_at_3
value: 48.14241486068111
- type: mrr_at_5
value: 50.123839009287906
- type: nauc_map_at_1000_diff1
value: 24.080224706969375
- type: nauc_map_at_1000_max
value: 19.837568780211047
- type: nauc_map_at_100_diff1
value: 24.547349720088924
- type: nauc_map_at_100_max
value: 18.76780474051508
- type: nauc_map_at_10_diff1
value: 27.36097408135702
- type: nauc_map_at_10_max
value: 12.918546361288225
- type: nauc_map_at_1_diff1
value: 37.536183652718385
- type: nauc_map_at_1_max
value: -1.1491546718720773
- type: nauc_map_at_20_diff1
value: 25.802748647118108
- type: nauc_map_at_20_max
value: 15.721455228191134
- type: nauc_map_at_3_diff1
value: 34.211955433524196
- type: nauc_map_at_3_max
value: 6.1567837982868
- type: nauc_map_at_5_diff1
value: 31.355144847293875
- type: nauc_map_at_5_max
value: 9.248778872692705
- type: nauc_mrr_at_1000_diff1
value: 32.6636800918941
- type: nauc_mrr_at_1000_max
value: 38.762875976765784
- type: nauc_mrr_at_100_diff1
value: 32.68119918270669
- type: nauc_mrr_at_100_max
value: 38.78855177547366
- type: nauc_mrr_at_10_diff1
value: 32.6341499099646
- type: nauc_mrr_at_10_max
value: 38.57304636650009
- type: nauc_mrr_at_1_diff1
value: 33.31813734214396
- type: nauc_mrr_at_1_max
value: 32.615374725091584
- type: nauc_mrr_at_20_diff1
value: 32.70066434653636
- type: nauc_mrr_at_20_max
value: 38.65649435307692
- type: nauc_mrr_at_3_diff1
value: 31.970185140740586
- type: nauc_mrr_at_3_max
value: 37.36826662425585
- type: nauc_mrr_at_5_diff1
value: 32.75800999798963
- type: nauc_mrr_at_5_max
value: 38.54932999541346
- type: nauc_ndcg_at_1000_diff1
value: 24.176402213150016
- type: nauc_ndcg_at_1000_max
value: 37.64114068923691
- type: nauc_ndcg_at_100_diff1
value: 24.70407050925309
- type: nauc_ndcg_at_100_max
value: 32.22741268616694
- type: nauc_ndcg_at_10_diff1
value: 22.754607233929026
- type: nauc_ndcg_at_10_max
value: 31.40000991648274
- type: nauc_ndcg_at_1_diff1
value: 35.00398764062468
- type: nauc_ndcg_at_1_max
value: 31.74496244016824
- type: nauc_ndcg_at_20_diff1
value: 22.663343998503983
- type: nauc_ndcg_at_20_max
value: 29.9805490570407
- type: nauc_ndcg_at_3_diff1
value: 28.436841714938453
- type: nauc_ndcg_at_3_max
value: 32.998271531027235
- type: nauc_ndcg_at_5_diff1
value: 25.421553164974465
- type: nauc_ndcg_at_5_max
value: 32.18602590821988
- type: nauc_precision_at_1000_diff1
value: -0.3756162839426823
- type: nauc_precision_at_1000_max
value: 10.910413856430598
- type: nauc_precision_at_100_diff1
value: 1.1336658089729603
- type: nauc_precision_at_100_max
value: 22.634879988362034
- type: nauc_precision_at_10_diff1
value: 6.74407085117157
- type: nauc_precision_at_10_max
value: 34.47976455090075
- type: nauc_precision_at_1_diff1
value: 33.31813734214396
- type: nauc_precision_at_1_max
value: 32.615374725091584
- type: nauc_precision_at_20_diff1
value: 4.689841488175409
- type: nauc_precision_at_20_max
value: 30.590963857775392
- type: nauc_precision_at_3_diff1
value: 20.55747518226839
- type: nauc_precision_at_3_max
value: 37.32360882162725
- type: nauc_precision_at_5_diff1
value: 13.519365761095049
- type: nauc_precision_at_5_max
value: 36.50427925753384
- type: nauc_recall_at_1000_diff1
value: 6.311137710079621
- type: nauc_recall_at_1000_max
value: 17.25930964555457
- type: nauc_recall_at_100_diff1
value: 12.070904435192633
- type: nauc_recall_at_100_max
value: 16.897787779896493
- type: nauc_recall_at_10_diff1
value: 18.611832015375775
- type: nauc_recall_at_10_max
value: 11.574222141528578
- type: nauc_recall_at_1_diff1
value: 37.536183652718385
- type: nauc_recall_at_1_max
value: -1.1491546718720773
- type: nauc_recall_at_20_diff1
value: 16.07623501237203
- type: nauc_recall_at_20_max
value: 13.067066121164272
- type: nauc_recall_at_3_diff1
value: 30.081195021905327
- type: nauc_recall_at_3_max
value: 5.842397752446413
- type: nauc_recall_at_5_diff1
value: 24.93661141138374
- type: nauc_recall_at_5_max
value: 9.101458243965707
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 29.584
- type: ndcg_at_1000
value: 38.547
- type: ndcg_at_20
value: 30.375999999999998
- type: ndcg_at_3
value: 36.538
- type: ndcg_at_5
value: 35.221999999999994
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 24.334
- type: precision_at_100
value: 7.424
- type: precision_at_1000
value: 2.0340000000000003
- type: precision_at_20
value: 17.91
- type: precision_at_3
value: 34.056
- type: precision_at_5
value: 30.464000000000002
- type: recall_at_1
value: 5.349
- type: recall_at_10
value: 16.242
- type: recall_at_100
value: 29.945
- type: recall_at_1000
value: 62.05199999999999
- type: recall_at_20
value: 19.902
- type: recall_at_3
value: 9.876
- type: recall_at_5
value: 12.572
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 29.425
- type: map_at_10
value: 44.198
- type: map_at_100
value: 45.295
- type: map_at_1000
value: 45.33
- type: map_at_20
value: 44.936
- type: map_at_3
value: 39.875
- type: map_at_5
value: 42.436
- type: mrr_at_1
value: 33.25608342989571
- type: mrr_at_10
value: 46.87629899391188
- type: mrr_at_100
value: 47.690164701118185
- type: mrr_at_1000
value: 47.71428403625546
- type: mrr_at_20
value: 47.42833555993071
- type: mrr_at_3
value: 43.34685206643475
- type: mrr_at_5
value: 45.51950560061793
- type: nauc_map_at_1000_diff1
value: 27.312226360073222
- type: nauc_map_at_1000_max
value: 22.962888694992817
- type: nauc_map_at_100_diff1
value: 27.31236017939728
- type: nauc_map_at_100_max
value: 22.98214732521748
- type: nauc_map_at_10_diff1
value: 27.124754062247607
- type: nauc_map_at_10_max
value: 22.98523700527181
- type: nauc_map_at_1_diff1
value: 30.166695274339734
- type: nauc_map_at_1_max
value: 16.94040610050655
- type: nauc_map_at_20_diff1
value: 27.347142209622294
- type: nauc_map_at_20_max
value: 23.0670669691066
- type: nauc_map_at_3_diff1
value: 26.632306929470623
- type: nauc_map_at_3_max
value: 21.481691542717822
- type: nauc_map_at_5_diff1
value: 27.287804705337535
- type: nauc_map_at_5_max
value: 22.493844564235342
- type: nauc_mrr_at_1000_diff1
value: 27.30127737418056
- type: nauc_mrr_at_1000_max
value: 22.787859374079716
- type: nauc_mrr_at_100_diff1
value: 27.303876802541748
- type: nauc_mrr_at_100_max
value: 22.80721855197395
- type: nauc_mrr_at_10_diff1
value: 27.186611528500187
- type: nauc_mrr_at_10_max
value: 22.859824168434336
- type: nauc_mrr_at_1_diff1
value: 29.671840768485236
- type: nauc_mrr_at_1_max
value: 18.62641271291652
- type: nauc_mrr_at_20_diff1
value: 27.319675868007113
- type: nauc_mrr_at_20_max
value: 22.87274340612363
- type: nauc_mrr_at_3_diff1
value: 26.61707654253056
- type: nauc_mrr_at_3_max
value: 22.025508910524508
- type: nauc_mrr_at_5_diff1
value: 27.275599678159214
- type: nauc_mrr_at_5_max
value: 22.817841124325863
- type: nauc_ndcg_at_1000_diff1
value: 27.024135971460566
- type: nauc_ndcg_at_1000_max
value: 24.731731553474926
- type: nauc_ndcg_at_100_diff1
value: 27.04262454578955
- type: nauc_ndcg_at_100_max
value: 25.345992976621467
- type: nauc_ndcg_at_10_diff1
value: 26.52062330830679
- type: nauc_ndcg_at_10_max
value: 25.54987961260865
- type: nauc_ndcg_at_1_diff1
value: 29.75855826437188
- type: nauc_ndcg_at_1_max
value: 18.533988613886347
- type: nauc_ndcg_at_20_diff1
value: 27.179672428049358
- type: nauc_ndcg_at_20_max
value: 25.868852780159955
- type: nauc_ndcg_at_3_diff1
value: 25.73603612782681
- type: nauc_ndcg_at_3_max
value: 22.822406256607508
- type: nauc_ndcg_at_5_diff1
value: 26.866765656608166
- type: nauc_ndcg_at_5_max
value: 24.563327197364192
- type: nauc_precision_at_1000_diff1
value: -3.6671099515474546
- type: nauc_precision_at_1000_max
value: 8.392883425148419
- type: nauc_precision_at_100_diff1
value: 1.6665941155151767
- type: nauc_precision_at_100_max
value: 17.17133780287315
- type: nauc_precision_at_10_diff1
value: 12.52603911510069
- type: nauc_precision_at_10_max
value: 27.829277569200784
- type: nauc_precision_at_1_diff1
value: 29.75855826437188
- type: nauc_precision_at_1_max
value: 18.533988613886347
- type: nauc_precision_at_20_diff1
value: 10.394727205117283
- type: nauc_precision_at_20_max
value: 25.940765842937154
- type: nauc_precision_at_3_diff1
value: 18.89131093283048
- type: nauc_precision_at_3_max
value: 27.157524819932267
- type: nauc_precision_at_5_diff1
value: 18.540350280124553
- type: nauc_precision_at_5_max
value: 28.899931159479568
- type: nauc_recall_at_1000_diff1
value: 17.289718955680986
- type: nauc_recall_at_1000_max
value: 69.19421724583071
- type: nauc_recall_at_100_diff1
value: 24.72129337125488
- type: nauc_recall_at_100_max
value: 50.247836461213026
- type: nauc_recall_at_10_diff1
value: 23.094557038973154
- type: nauc_recall_at_10_max
value: 34.688707228391436
- type: nauc_recall_at_1_diff1
value: 30.166695274339734
- type: nauc_recall_at_1_max
value: 16.94040610050655
- type: nauc_recall_at_20_diff1
value: 26.066983320875796
- type: nauc_recall_at_20_max
value: 40.1137815499548
- type: nauc_recall_at_3_diff1
value: 22.431670311997458
- type: nauc_recall_at_3_max
value: 25.214444970878137
- type: nauc_recall_at_5_diff1
value: 24.77117131128259
- type: nauc_recall_at_5_max
value: 29.723938351180635
- type: ndcg_at_1
value: 33.227000000000004
- type: ndcg_at_10
value: 51.919000000000004
- type: ndcg_at_100
value: 56.484
- type: ndcg_at_1000
value: 57.282999999999994
- type: ndcg_at_20
value: 54.269999999999996
- type: ndcg_at_3
value: 43.828
- type: ndcg_at_5
value: 48.098
- type: precision_at_1
value: 33.227000000000004
- type: precision_at_10
value: 8.652999999999999
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.12
- type: precision_at_20
value: 4.89
- type: precision_at_3
value: 20.133000000000003
- type: precision_at_5
value: 14.513000000000002
- type: recall_at_1
value: 29.425
- type: recall_at_10
value: 72.82
- type: recall_at_100
value: 92.538
- type: recall_at_1000
value: 98.424
- type: recall_at_20
value: 81.477
- type: recall_at_3
value: 51.815
- type: recall_at_5
value: 61.667
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 69.82000000000001
- type: map_at_10
value: 83.673
- type: map_at_100
value: 84.344
- type: map_at_1000
value: 84.36099999999999
- type: map_at_20
value: 84.112
- type: map_at_3
value: 80.729
- type: map_at_5
value: 82.61500000000001
- type: mrr_at_1
value: 80.53
- type: mrr_at_10
value: 86.92019047619019
- type: mrr_at_100
value: 87.04157853007743
- type: mrr_at_1000
value: 87.04266956945463
- type: mrr_at_20
value: 87.01226011501363
- type: mrr_at_3
value: 85.9499999999997
- type: mrr_at_5
value: 86.63349999999961
- type: nauc_map_at_1000_diff1
value: 75.65678265654154
- type: nauc_map_at_1000_max
value: 31.033445730504262
- type: nauc_map_at_100_diff1
value: 75.665676378331
- type: nauc_map_at_100_max
value: 31.01031193403524
- type: nauc_map_at_10_diff1
value: 75.92155211732666
- type: nauc_map_at_10_max
value: 30.699553005920876
- type: nauc_map_at_1_diff1
value: 78.81016014060471
- type: nauc_map_at_1_max
value: 22.814760471331834
- type: nauc_map_at_20_diff1
value: 75.74929629006137
- type: nauc_map_at_20_max
value: 30.893554874139433
- type: nauc_map_at_3_diff1
value: 76.1764115683232
- type: nauc_map_at_3_max
value: 28.16660891891235
- type: nauc_map_at_5_diff1
value: 76.09603035768426
- type: nauc_map_at_5_max
value: 29.56936639444799
- type: nauc_mrr_at_1000_diff1
value: 75.78580796966573
- type: nauc_mrr_at_1000_max
value: 34.02109834360725
- type: nauc_mrr_at_100_diff1
value: 75.78595527403508
- type: nauc_mrr_at_100_max
value: 34.021087682823335
- type: nauc_mrr_at_10_diff1
value: 75.7979018845172
- type: nauc_mrr_at_10_max
value: 34.135531496766916
- type: nauc_mrr_at_1_diff1
value: 76.43928247510952
- type: nauc_mrr_at_1_max
value: 33.337959508505584
- type: nauc_mrr_at_20_diff1
value: 75.79212450365954
- type: nauc_mrr_at_20_max
value: 34.06210079146971
- type: nauc_mrr_at_3_diff1
value: 75.5255736362254
- type: nauc_mrr_at_3_max
value: 33.89721705659265
- type: nauc_mrr_at_5_diff1
value: 75.68937847088145
- type: nauc_mrr_at_5_max
value: 34.02467565216077
- type: nauc_ndcg_at_1000_diff1
value: 75.4126317877968
- type: nauc_ndcg_at_1000_max
value: 32.699594674546205
- type: nauc_ndcg_at_100_diff1
value: 75.47759102907804
- type: nauc_ndcg_at_100_max
value: 32.66261940818275
- type: nauc_ndcg_at_10_diff1
value: 75.73588489669638
- type: nauc_ndcg_at_10_max
value: 32.33245124470892
- type: nauc_ndcg_at_1_diff1
value: 76.45767684895692
- type: nauc_ndcg_at_1_max
value: 33.20840934731383
- type: nauc_ndcg_at_20_diff1
value: 75.58603380019049
- type: nauc_ndcg_at_20_max
value: 32.38108118316705
- type: nauc_ndcg_at_3_diff1
value: 74.93057981530794
- type: nauc_ndcg_at_3_max
value: 30.454945230766768
- type: nauc_ndcg_at_5_diff1
value: 75.51622080687214
- type: nauc_ndcg_at_5_max
value: 31.13850590682269
- type: nauc_precision_at_1000_diff1
value: -43.024967717775205
- type: nauc_precision_at_1000_max
value: -3.1998109745927548
- type: nauc_precision_at_100_diff1
value: -42.515126578347335
- type: nauc_precision_at_100_max
value: -2.9690115801151893
- type: nauc_precision_at_10_diff1
value: -35.9659880370386
- type: nauc_precision_at_10_max
value: 1.9839275718355172
- type: nauc_precision_at_1_diff1
value: 76.45767684895692
- type: nauc_precision_at_1_max
value: 33.20840934731383
- type: nauc_precision_at_20_diff1
value: -39.88046573159446
- type: nauc_precision_at_20_max
value: -0.9048730965169257
- type: nauc_precision_at_3_diff1
value: -14.92778542739045
- type: nauc_precision_at_3_max
value: 9.506709583490675
- type: nauc_precision_at_5_diff1
value: -28.135454012537338
- type: nauc_precision_at_5_max
value: 4.920384611100925
- type: nauc_recall_at_1000_diff1
value: 50.96814010736587
- type: nauc_recall_at_1000_max
value: 35.178726454686455
- type: nauc_recall_at_100_diff1
value: 73.73669235424917
- type: nauc_recall_at_100_max
value: 37.233185728668374
- type: nauc_recall_at_10_diff1
value: 72.24423264106863
- type: nauc_recall_at_10_max
value: 31.982200960724956
- type: nauc_recall_at_1_diff1
value: 78.81016014060471
- type: nauc_recall_at_1_max
value: 22.814760471331834
- type: nauc_recall_at_20_diff1
value: 71.93499290798466
- type: nauc_recall_at_20_max
value: 32.59815791665593
- type: nauc_recall_at_3_diff1
value: 72.3320862681771
- type: nauc_recall_at_3_max
value: 25.07376413220408
- type: nauc_recall_at_5_diff1
value: 71.67188030305823
- type: nauc_recall_at_5_max
value: 26.80994890402375
- type: ndcg_at_1
value: 80.52
- type: ndcg_at_10
value: 87.561
- type: ndcg_at_100
value: 88.90299999999999
- type: ndcg_at_1000
value: 89.01400000000001
- type: ndcg_at_20
value: 88.293
- type: ndcg_at_3
value: 84.682
- type: ndcg_at_5
value: 86.319
- type: precision_at_1
value: 80.52
- type: precision_at_10
value: 13.288
- type: precision_at_100
value: 1.529
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.077
- type: precision_at_3
value: 37.047000000000004
- type: precision_at_5
value: 24.398
- type: recall_at_1
value: 69.82000000000001
- type: recall_at_10
value: 94.826
- type: recall_at_100
value: 99.451
- type: recall_at_1000
value: 99.97399999999999
- type: recall_at_20
value: 97.18900000000001
- type: recall_at_3
value: 86.514
- type: recall_at_5
value: 91.142
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 51.343294599772406
- type: v_measures
value:
- 0.524406904623621
- 0.6101985612354274
- 0.4897743980881878
- 0.4778925279162627
- 0.5135415040552358
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 62.5095092123788
- type: v_measures
value:
- 0.6648234836097784
- 0.6647202577739932
- 0.6578622320980865
- 0.3861971752849744
- 0.702695555977878
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 4.077999999999999
- type: map_at_10
value: 9.778
- type: map_at_100
value: 11.82
- type: map_at_1000
value: 12.115
- type: map_at_20
value: 10.781
- type: map_at_3
value: 6.973999999999999
- type: map_at_5
value: 8.305
- type: mrr_at_1
value: 20.0
- type: mrr_at_10
value: 29.48242063492062
- type: mrr_at_100
value: 30.73048798111141
- type: mrr_at_1000
value: 30.80176607280717
- type: mrr_at_20
value: 30.25008647742002
- type: mrr_at_3
value: 26.200000000000035
- type: mrr_at_5
value: 28.159999999999997
- type: nauc_map_at_1000_diff1
value: 11.636747601961716
- type: nauc_map_at_1000_max
value: 26.56203998243688
- type: nauc_map_at_100_diff1
value: 11.563351860766852
- type: nauc_map_at_100_max
value: 26.424696755662403
- type: nauc_map_at_10_diff1
value: 11.60703222490682
- type: nauc_map_at_10_max
value: 25.186183431711402
- type: nauc_map_at_1_diff1
value: 17.854436578277337
- type: nauc_map_at_1_max
value: 18.26224581093851
- type: nauc_map_at_20_diff1
value: 12.180332838724942
- type: nauc_map_at_20_max
value: 25.8033910164551
- type: nauc_map_at_3_diff1
value: 12.934911478090966
- type: nauc_map_at_3_max
value: 23.735462709754234
- type: nauc_map_at_5_diff1
value: 12.555897350050973
- type: nauc_map_at_5_max
value: 24.934261010188717
- type: nauc_mrr_at_1000_diff1
value: 15.322270869482562
- type: nauc_mrr_at_1000_max
value: 21.176483776434594
- type: nauc_mrr_at_100_diff1
value: 15.313592100314633
- type: nauc_mrr_at_100_max
value: 21.189701285575325
- type: nauc_mrr_at_10_diff1
value: 15.04116376326651
- type: nauc_mrr_at_10_max
value: 20.925070769290475
- type: nauc_mrr_at_1_diff1
value: 17.964804649660955
- type: nauc_mrr_at_1_max
value: 18.691637068130458
- type: nauc_mrr_at_20_diff1
value: 15.2990030423077
- type: nauc_mrr_at_20_max
value: 21.217369625839176
- type: nauc_mrr_at_3_diff1
value: 15.528981019166746
- type: nauc_mrr_at_3_max
value: 20.603019869260343
- type: nauc_mrr_at_5_diff1
value: 15.466774363671028
- type: nauc_mrr_at_5_max
value: 20.975652450389344
- type: nauc_ndcg_at_1000_diff1
value: 12.038972353500306
- type: nauc_ndcg_at_1000_max
value: 27.903890926586932
- type: nauc_ndcg_at_100_diff1
value: 11.319931937239677
- type: nauc_ndcg_at_100_max
value: 27.027157611973795
- type: nauc_ndcg_at_10_diff1
value: 11.078446709122273
- type: nauc_ndcg_at_10_max
value: 23.822320107334757
- type: nauc_ndcg_at_1_diff1
value: 17.964804649660955
- type: nauc_ndcg_at_1_max
value: 18.691637068130458
- type: nauc_ndcg_at_20_diff1
value: 12.181596567416559
- type: nauc_ndcg_at_20_max
value: 25.37079586511582
- type: nauc_ndcg_at_3_diff1
value: 13.160188878029732
- type: nauc_ndcg_at_3_max
value: 23.08138307792905
- type: nauc_ndcg_at_5_diff1
value: 12.469734888035655
- type: nauc_ndcg_at_5_max
value: 23.90995773496041
- type: nauc_precision_at_1000_diff1
value: 6.917224255812411
- type: nauc_precision_at_1000_max
value: 25.68380359473843
- type: nauc_precision_at_100_diff1
value: 6.398565019403672
- type: nauc_precision_at_100_max
value: 25.768394101162112
- type: nauc_precision_at_10_diff1
value: 6.984700975104491
- type: nauc_precision_at_10_max
value: 22.95876859709157
- type: nauc_precision_at_1_diff1
value: 17.964804649660955
- type: nauc_precision_at_1_max
value: 18.691637068130458
- type: nauc_precision_at_20_diff1
value: 9.297910278094806
- type: nauc_precision_at_20_max
value: 25.047556185054287
- type: nauc_precision_at_3_diff1
value: 11.002794885922452
- type: nauc_precision_at_3_max
value: 24.196674867665706
- type: nauc_precision_at_5_diff1
value: 10.022107536962826
- type: nauc_precision_at_5_max
value: 25.168302306099687
- type: nauc_recall_at_1000_diff1
value: 7.394647014691793
- type: nauc_recall_at_1000_max
value: 25.484913499786305
- type: nauc_recall_at_100_diff1
value: 6.725259371546906
- type: nauc_recall_at_100_max
value: 25.516933161468554
- type: nauc_recall_at_10_diff1
value: 7.129477895634021
- type: nauc_recall_at_10_max
value: 22.349457271477668
- type: nauc_recall_at_1_diff1
value: 17.854436578277337
- type: nauc_recall_at_1_max
value: 18.26224581093851
- type: nauc_recall_at_20_diff1
value: 9.42187360268394
- type: nauc_recall_at_20_max
value: 24.71382966903428
- type: nauc_recall_at_3_diff1
value: 10.921292270138911
- type: nauc_recall_at_3_max
value: 23.694131354239822
- type: nauc_recall_at_5_diff1
value: 9.99247973152595
- type: nauc_recall_at_5_max
value: 24.651761102821084
- type: ndcg_at_1
value: 20.0
- type: ndcg_at_10
value: 16.927
- type: ndcg_at_100
value: 25.258999999999997
- type: ndcg_at_1000
value: 30.653999999999996
- type: ndcg_at_20
value: 19.764
- type: ndcg_at_3
value: 15.744
- type: ndcg_at_5
value: 13.925
- type: precision_at_1
value: 20.0
- type: precision_at_10
value: 8.83
- type: precision_at_100
value: 2.095
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_20
value: 6.0600000000000005
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 12.24
- type: recall_at_1
value: 4.077999999999999
- type: recall_at_10
value: 17.928
- type: recall_at_100
value: 42.518
- type: recall_at_1000
value: 68.845
- type: recall_at_20
value: 24.6
- type: recall_at_3
value: 8.898
- type: recall_at_5
value: 12.413
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 85.7559383087264
- type: cos_sim_spearman
value: 79.49310740723678
- type: euclidean_pearson
value: 83.65824145595195
- type: euclidean_spearman
value: 79.49312535113798
- type: manhattan_pearson
value: 83.66988365842141
- type: manhattan_spearman
value: 79.47470834406325
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.5839985534636
- type: cos_sim_spearman
value: 79.59450354665624
- type: euclidean_pearson
value: 84.43014644677757
- type: euclidean_spearman
value: 79.59390361649422
- type: manhattan_pearson
value: 84.29576591448196
- type: manhattan_spearman
value: 79.41063925463973
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.29076815493453
- type: cos_sim_spearman
value: 84.70010174539925
- type: euclidean_pearson
value: 84.61503525685455
- type: euclidean_spearman
value: 84.70010174539925
- type: manhattan_pearson
value: 84.49870751490295
- type: manhattan_spearman
value: 84.56101687530112
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.9713152765247
- type: cos_sim_spearman
value: 83.91546630697559
- type: euclidean_pearson
value: 85.24770099001337
- type: euclidean_spearman
value: 83.915468446163
- type: manhattan_pearson
value: 85.10357473564157
- type: manhattan_spearman
value: 83.73948507923797
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.93457583410698
- type: cos_sim_spearman
value: 89.87899103812357
- type: euclidean_pearson
value: 89.3577529833881
- type: euclidean_spearman
value: 89.87899029911844
- type: manhattan_pearson
value: 89.27376081191959
- type: manhattan_spearman
value: 89.7855896343813
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.07870989011116
- type: cos_sim_spearman
value: 84.87521134248455
- type: euclidean_pearson
value: 84.64895196926211
- type: euclidean_spearman
value: 84.87521088458564
- type: manhattan_pearson
value: 84.5177988181249
- type: manhattan_spearman
value: 84.75674790631112
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cos_sim_pearson
value: 88.2908976539434
- type: cos_sim_spearman
value: 88.69212374652548
- type: euclidean_pearson
value: 89.05936753064138
- type: euclidean_spearman
value: 88.69212374652548
- type: manhattan_pearson
value: 89.01731329350842
- type: manhattan_spearman
value: 88.59540957427528
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cos_sim_pearson
value: 64.635158432127
- type: cos_sim_spearman
value: 64.27073239884248
- type: euclidean_pearson
value: 66.21751368412394
- type: euclidean_spearman
value: 64.27073239884248
- type: manhattan_pearson
value: 66.15088837552513
- type: manhattan_spearman
value: 64.15068735594215
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.42604775914748
- type: cos_sim_spearman
value: 87.44031003802738
- type: euclidean_pearson
value: 87.43316615586107
- type: euclidean_spearman
value: 87.44031555024793
- type: manhattan_pearson
value: 87.31365950205998
- type: manhattan_spearman
value: 87.2941204713115
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.48265276238037
- type: mrr
value: 95.01133216819491
- type: nAUC_map_diff1
value: 7.09429277125843
- type: nAUC_map_max
value: 49.94029950841183
- type: nAUC_mrr_diff1
value: 50.34339361427615
- type: nAUC_mrr_max
value: 79.45634760563627
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 54.678000000000004
- type: map_at_10
value: 63.80200000000001
- type: map_at_100
value: 64.41
- type: map_at_1000
value: 64.444
- type: map_at_20
value: 64.164
- type: map_at_3
value: 60.925
- type: map_at_5
value: 62.572
- type: mrr_at_1
value: 57.333333333333336
- type: mrr_at_10
value: 65.36190476190477
- type: mrr_at_100
value: 65.81385754624793
- type: mrr_at_1000
value: 65.84158115043161
- type: mrr_at_20
value: 65.65317979234389
- type: mrr_at_3
value: 63.0
- type: mrr_at_5
value: 64.41666666666667
- type: nauc_map_at_1000_diff1
value: 69.67809035584142
- type: nauc_map_at_1000_max
value: 48.96413821482873
- type: nauc_map_at_100_diff1
value: 69.65364392825721
- type: nauc_map_at_100_max
value: 48.94627564310418
- type: nauc_map_at_10_diff1
value: 69.57800581171497
- type: nauc_map_at_10_max
value: 48.67897603770599
- type: nauc_map_at_1_diff1
value: 73.01790551734226
- type: nauc_map_at_1_max
value: 45.104875261851916
- type: nauc_map_at_20_diff1
value: 69.61362614678308
- type: nauc_map_at_20_max
value: 48.84035952302365
- type: nauc_map_at_3_diff1
value: 70.38590108900951
- type: nauc_map_at_3_max
value: 46.43843764953461
- type: nauc_map_at_5_diff1
value: 70.52260971372544
- type: nauc_map_at_5_max
value: 48.476797316953416
- type: nauc_mrr_at_1000_diff1
value: 69.5164716438929
- type: nauc_mrr_at_1000_max
value: 51.01945033286556
- type: nauc_mrr_at_100_diff1
value: 69.49106498529696
- type: nauc_mrr_at_100_max
value: 50.99948856957515
- type: nauc_mrr_at_10_diff1
value: 69.36739032618048
- type: nauc_mrr_at_10_max
value: 51.12574202596153
- type: nauc_mrr_at_1_diff1
value: 72.36743551825465
- type: nauc_mrr_at_1_max
value: 47.803395494440075
- type: nauc_mrr_at_20_diff1
value: 69.45736580905654
- type: nauc_mrr_at_20_max
value: 51.125870226973
- type: nauc_mrr_at_3_diff1
value: 70.18553354726865
- type: nauc_mrr_at_3_max
value: 50.0088700597719
- type: nauc_mrr_at_5_diff1
value: 70.13590171518094
- type: nauc_mrr_at_5_max
value: 51.43303825871863
- type: nauc_ndcg_at_1000_diff1
value: 68.59647494694465
- type: nauc_ndcg_at_1000_max
value: 50.84423309342368
- type: nauc_ndcg_at_100_diff1
value: 67.82685442401551
- type: nauc_ndcg_at_100_max
value: 50.69511102823468
- type: nauc_ndcg_at_10_diff1
value: 67.42699840483867
- type: nauc_ndcg_at_10_max
value: 50.367163593062216
- type: nauc_ndcg_at_1_diff1
value: 72.36743551825465
- type: nauc_ndcg_at_1_max
value: 47.803395494440075
- type: nauc_ndcg_at_20_diff1
value: 67.52338557685287
- type: nauc_ndcg_at_20_max
value: 50.79192152642611
- type: nauc_ndcg_at_3_diff1
value: 69.49850475650591
- type: nauc_ndcg_at_3_max
value: 47.69895483323495
- type: nauc_ndcg_at_5_diff1
value: 69.63182188059407
- type: nauc_ndcg_at_5_max
value: 50.692751514480086
- type: nauc_precision_at_1000_diff1
value: -22.076090806418165
- type: nauc_precision_at_1000_max
value: 15.457406924757958
- type: nauc_precision_at_100_diff1
value: -8.958717111709591
- type: nauc_precision_at_100_max
value: 23.39568973722963
- type: nauc_precision_at_10_diff1
value: 18.145183858510112
- type: nauc_precision_at_10_max
value: 38.39226201682134
- type: nauc_precision_at_1_diff1
value: 72.36743551825465
- type: nauc_precision_at_1_max
value: 47.803395494440075
- type: nauc_precision_at_20_diff1
value: 8.777082697589234
- type: nauc_precision_at_20_max
value: 35.78403592135664
- type: nauc_precision_at_3_diff1
value: 51.376349362119726
- type: nauc_precision_at_3_max
value: 42.810598626104
- type: nauc_precision_at_5_diff1
value: 40.9296274632404
- type: nauc_precision_at_5_max
value: 45.61709594788111
- type: nauc_recall_at_1000_diff1
value: 77.7077497665738
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_100_diff1
value: 46.35143397460506
- type: nauc_recall_at_100_max
value: 53.648092062544094
- type: nauc_recall_at_10_diff1
value: 55.95034026811939
- type: nauc_recall_at_10_max
value: 52.01290145176738
- type: nauc_recall_at_1_diff1
value: 73.01790551734226
- type: nauc_recall_at_1_max
value: 45.104875261851916
- type: nauc_recall_at_20_diff1
value: 53.20691212906598
- type: nauc_recall_at_20_max
value: 54.72055923613812
- type: nauc_recall_at_3_diff1
value: 66.24729517787732
- type: nauc_recall_at_3_max
value: 46.4286391286208
- type: nauc_recall_at_5_diff1
value: 65.39243758829612
- type: nauc_recall_at_5_max
value: 54.04465496293424
- type: ndcg_at_1
value: 57.333
- type: ndcg_at_10
value: 68.471
- type: ndcg_at_100
value: 71.16900000000001
- type: ndcg_at_1000
value: 71.934
- type: ndcg_at_20
value: 69.706
- type: ndcg_at_3
value: 63.404
- type: ndcg_at_5
value: 65.93900000000001
- type: precision_at_1
value: 57.333
- type: precision_at_10
value: 9.133
- type: precision_at_100
value: 1.06
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 4.833
- type: precision_at_3
value: 24.444
- type: precision_at_5
value: 16.267
- type: recall_at_1
value: 54.678000000000004
- type: recall_at_10
value: 81.244
- type: recall_at_100
value: 93.43299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_20
value: 86.02199999999999
- type: recall_at_3
value: 67.878
- type: recall_at_5
value: 74.06700000000001
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79702970297029
- type: cos_sim_ap
value: 94.75976312657727
- type: cos_sim_f1
value: 89.7029702970297
- type: cos_sim_precision
value: 88.8235294117647
- type: cos_sim_recall
value: 90.60000000000001
- type: dot_accuracy
value: 99.79702970297029
- type: dot_ap
value: 94.75976312657727
- type: dot_f1
value: 89.7029702970297
- type: dot_precision
value: 88.8235294117647
- type: dot_recall
value: 90.60000000000001
- type: euclidean_accuracy
value: 99.79702970297029
- type: euclidean_ap
value: 94.75976312657727
- type: euclidean_f1
value: 89.7029702970297
- type: euclidean_precision
value: 88.8235294117647
- type: euclidean_recall
value: 90.60000000000001
- type: manhattan_accuracy
value: 99.79207920792079
- type: manhattan_ap
value: 94.55264782714548
- type: manhattan_f1
value: 89.30753564154786
- type: manhattan_precision
value: 90.97510373443983
- type: manhattan_recall
value: 87.7
- type: max_accuracy
value: 99.79702970297029
- type: max_ap
value: 94.75976312657727
- type: max_f1
value: 89.7029702970297
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 61.56750444120404
- type: v_measures
value:
- 0.6143066231410037
- 0.6484543949297663
- 0.5142449933097217
- 0.6062244515066045
- 0.5739832317328382
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.59242218010789
- type: v_measures
value:
- 0.32392220242444014
- 0.32718501590648996
- 0.32976751674384125
- 0.3203167039651582
- 0.32297852686667516
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.200337844678295
- type: mrr
value: 49.95621923011629
- type: nAUC_map_diff1
value: 35.98209315270176
- type: nAUC_map_max
value: 12.780142559584018
- type: nAUC_mrr_diff1
value: 36.332794804706545
- type: nAUC_mrr_max
value: 13.650142919994185
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.273860160997156
- type: cos_sim_spearman
value: 30.256535662300372
- type: dot_pearson
value: 30.273861391746525
- type: dot_spearman
value: 30.25651496178948
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.161
- type: map_at_10
value: 1.0410000000000001
- type: map_at_100
value: 5.558
- type: map_at_1000
value: 15.296000000000001
- type: map_at_20
value: 1.755
- type: map_at_3
value: 0.437
- type: map_at_5
value: 0.628
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 73.77222222222223
- type: mrr_at_100
value: 74.10160298638559
- type: mrr_at_1000
value: 74.10160298638559
- type: mrr_at_20
value: 73.9540404040404
- type: mrr_at_3
value: 70.0
- type: mrr_at_5
value: 73.3
- type: nauc_map_at_1000_diff1
value: -1.9604698462344767
- type: nauc_map_at_1000_max
value: 53.358985327864715
- type: nauc_map_at_100_diff1
value: -0.5161702872199433
- type: nauc_map_at_100_max
value: 46.53235662200936
- type: nauc_map_at_10_diff1
value: 3.9256229008973924
- type: nauc_map_at_10_max
value: 26.82251128561187
- type: nauc_map_at_1_diff1
value: 4.127739790921559
- type: nauc_map_at_1_max
value: 10.596981259216367
- type: nauc_map_at_20_diff1
value: 1.6849110098963962
- type: nauc_map_at_20_max
value: 32.645496648968034
- type: nauc_map_at_3_diff1
value: 6.0165331028800715
- type: nauc_map_at_3_max
value: 16.690807762030964
- type: nauc_map_at_5_diff1
value: 3.151304647408004
- type: nauc_map_at_5_max
value: 22.316738900184266
- type: nauc_mrr_at_1000_diff1
value: 8.665368838521031
- type: nauc_mrr_at_1000_max
value: 48.23582325840371
- type: nauc_mrr_at_100_diff1
value: 8.665368838521031
- type: nauc_mrr_at_100_max
value: 48.23582325840371
- type: nauc_mrr_at_10_diff1
value: 9.331739591001785
- type: nauc_mrr_at_10_max
value: 48.89453679791346
- type: nauc_mrr_at_1_diff1
value: 0.5428534934218375
- type: nauc_mrr_at_1_max
value: 35.480265678886184
- type: nauc_mrr_at_20_diff1
value: 8.583612874582078
- type: nauc_mrr_at_20_max
value: 48.472852713493815
- type: nauc_mrr_at_3_diff1
value: 9.854859452507785
- type: nauc_mrr_at_3_max
value: 50.846959397391124
- type: nauc_mrr_at_5_diff1
value: 9.223998156393836
- type: nauc_mrr_at_5_max
value: 49.03025489502146
- type: nauc_ndcg_at_1000_diff1
value: 1.8220536687131508
- type: nauc_ndcg_at_1000_max
value: 49.93338657126098
- type: nauc_ndcg_at_100_diff1
value: 3.0321366749721466
- type: nauc_ndcg_at_100_max
value: 51.5400551444691
- type: nauc_ndcg_at_10_diff1
value: 8.579231691222738
- type: nauc_ndcg_at_10_max
value: 46.76810123148509
- type: nauc_ndcg_at_1_diff1
value: -3.008490760055262
- type: nauc_ndcg_at_1_max
value: 26.51379381261564
- type: nauc_ndcg_at_20_diff1
value: 7.688195422853383
- type: nauc_ndcg_at_20_max
value: 49.01894979259541
- type: nauc_ndcg_at_3_diff1
value: 4.482353844424605
- type: nauc_ndcg_at_3_max
value: 41.5568352350729
- type: nauc_ndcg_at_5_diff1
value: 5.0965815841199005
- type: nauc_ndcg_at_5_max
value: 48.0173572974474
- type: nauc_precision_at_1000_diff1
value: -1.522279672648178
- type: nauc_precision_at_1000_max
value: 48.14464502949045
- type: nauc_precision_at_100_diff1
value: 0.3803220198283984
- type: nauc_precision_at_100_max
value: 54.606530270656286
- type: nauc_precision_at_10_diff1
value: 10.17060237919451
- type: nauc_precision_at_10_max
value: 51.07174301856844
- type: nauc_precision_at_1_diff1
value: 0.5428534934218375
- type: nauc_precision_at_1_max
value: 35.480265678886184
- type: nauc_precision_at_20_diff1
value: 5.7555901777882825
- type: nauc_precision_at_20_max
value: 51.63129458224357
- type: nauc_precision_at_3_diff1
value: 6.697227309781205
- type: nauc_precision_at_3_max
value: 48.75128365287913
- type: nauc_precision_at_5_diff1
value: 3.675984060933098
- type: nauc_precision_at_5_max
value: 54.034024908467984
- type: nauc_recall_at_1000_diff1
value: 4.99265973475678
- type: nauc_recall_at_1000_max
value: 47.39125173916412
- type: nauc_recall_at_100_diff1
value: 0.6537761781214193
- type: nauc_recall_at_100_max
value: 37.699187570549284
- type: nauc_recall_at_10_diff1
value: 8.17628273209583
- type: nauc_recall_at_10_max
value: 20.08017524188776
- type: nauc_recall_at_1_diff1
value: 4.127739790921559
- type: nauc_recall_at_1_max
value: 10.596981259216367
- type: nauc_recall_at_20_diff1
value: 4.210774786757633
- type: nauc_recall_at_20_max
value: 24.62795703117278
- type: nauc_recall_at_3_diff1
value: 8.110601816749067
- type: nauc_recall_at_3_max
value: 15.51743399119835
- type: nauc_recall_at_5_diff1
value: 5.312647840696286
- type: nauc_recall_at_5_max
value: 18.88262264074873
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 49.443
- type: ndcg_at_100
value: 37.479
- type: ndcg_at_1000
value: 38.735
- type: ndcg_at_20
value: 46.115
- type: ndcg_at_3
value: 54.458
- type: ndcg_at_5
value: 53.601
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 52.2
- type: precision_at_100
value: 38.64
- type: precision_at_1000
value: 17.98
- type: precision_at_20
value: 48.5
- type: precision_at_3
value: 59.333000000000006
- type: precision_at_5
value: 57.99999999999999
- type: recall_at_1
value: 0.161
- type: recall_at_10
value: 1.31
- type: recall_at_100
value: 9.105
- type: recall_at_1000
value: 38.330999999999996
- type: recall_at_20
value: 2.382
- type: recall_at_3
value: 0.47400000000000003
- type: recall_at_5
value: 0.735
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.773
- type: map_at_10
value: 5.819
- type: map_at_100
value: 9.791
- type: map_at_1000
value: 11.152
- type: map_at_20
value: 7.800999999999999
- type: map_at_3
value: 3.778
- type: map_at_5
value: 4.5760000000000005
- type: mrr_at_1
value: 26.53061224489796
- type: mrr_at_10
value: 36.790573372206026
- type: mrr_at_100
value: 38.9989669597618
- type: mrr_at_1000
value: 38.9989669597618
- type: mrr_at_20
value: 38.626122355982716
- type: mrr_at_3
value: 32.6530612244898
- type: mrr_at_5
value: 35.61224489795919
- type: nauc_map_at_1000_diff1
value: 5.982723549423614
- type: nauc_map_at_1000_max
value: -38.778150862184184
- type: nauc_map_at_100_diff1
value: 8.35542066448559
- type: nauc_map_at_100_max
value: -36.95988939092301
- type: nauc_map_at_10_diff1
value: 16.727112997939685
- type: nauc_map_at_10_max
value: -34.864329426120634
- type: nauc_map_at_1_diff1
value: 15.68698509866931
- type: nauc_map_at_1_max
value: -38.49856837383727
- type: nauc_map_at_20_diff1
value: 12.131431984811561
- type: nauc_map_at_20_max
value: -35.58027185761546
- type: nauc_map_at_3_diff1
value: 26.51714323506847
- type: nauc_map_at_3_max
value: -42.90178838356341
- type: nauc_map_at_5_diff1
value: 19.97087359992273
- type: nauc_map_at_5_max
value: -39.34647766735783
- type: nauc_mrr_at_1000_diff1
value: 1.9748555750391668
- type: nauc_mrr_at_1000_max
value: -38.23327494222865
- type: nauc_mrr_at_100_diff1
value: 1.9748555750391668
- type: nauc_mrr_at_100_max
value: -38.23327494222865
- type: nauc_mrr_at_10_diff1
value: 3.026383371657329
- type: nauc_mrr_at_10_max
value: -37.357911445290675
- type: nauc_mrr_at_1_diff1
value: -3.6234515245372143
- type: nauc_mrr_at_1_max
value: -34.918413083959635
- type: nauc_mrr_at_20_diff1
value: 2.785643581147567
- type: nauc_mrr_at_20_max
value: -39.027829254611596
- type: nauc_mrr_at_3_diff1
value: -0.652257776491799
- type: nauc_mrr_at_3_max
value: -35.777046370826746
- type: nauc_mrr_at_5_diff1
value: -1.0723788161779884
- type: nauc_mrr_at_5_max
value: -37.74663638254447
- type: nauc_ndcg_at_1000_diff1
value: -5.603421065562765
- type: nauc_ndcg_at_1000_max
value: -48.302585883872254
- type: nauc_ndcg_at_100_diff1
value: -0.4142249503459385
- type: nauc_ndcg_at_100_max
value: -44.266171155232605
- type: nauc_ndcg_at_10_diff1
value: 9.09113299295375
- type: nauc_ndcg_at_10_max
value: -33.95506408586246
- type: nauc_ndcg_at_1_diff1
value: -2.3509910942232137
- type: nauc_ndcg_at_1_max
value: -31.9181129505804
- type: nauc_ndcg_at_20_diff1
value: 6.711479298611486
- type: nauc_ndcg_at_20_max
value: -41.17797709135335
- type: nauc_ndcg_at_3_diff1
value: 7.181833876703895
- type: nauc_ndcg_at_3_max
value: -36.76673572406525
- type: nauc_ndcg_at_5_diff1
value: 4.220139919243461
- type: nauc_ndcg_at_5_max
value: -36.01626374596527
- type: nauc_precision_at_1000_diff1
value: -21.500096998480743
- type: nauc_precision_at_1000_max
value: 17.423538290188787
- type: nauc_precision_at_100_diff1
value: -20.576518815311264
- type: nauc_precision_at_100_max
value: -33.43443540744943
- type: nauc_precision_at_10_diff1
value: 5.916347999274714
- type: nauc_precision_at_10_max
value: -31.375195302655644
- type: nauc_precision_at_1_diff1
value: -3.6234515245372143
- type: nauc_precision_at_1_max
value: -34.918413083959635
- type: nauc_precision_at_20_diff1
value: -5.5440042532199145
- type: nauc_precision_at_20_max
value: -41.606107555682485
- type: nauc_precision_at_3_diff1
value: 9.493481238903758
- type: nauc_precision_at_3_max
value: -39.77246958608447
- type: nauc_precision_at_5_diff1
value: 0.9616292047999846
- type: nauc_precision_at_5_max
value: -36.360179658412726
- type: nauc_recall_at_1000_diff1
value: -18.950139772725883
- type: nauc_recall_at_1000_max
value: -49.50001953592577
- type: nauc_recall_at_100_diff1
value: -4.101410227998355
- type: nauc_recall_at_100_max
value: -43.41533454443838
- type: nauc_recall_at_10_diff1
value: 15.471367681837625
- type: nauc_recall_at_10_max
value: -30.21854343397064
- type: nauc_recall_at_1_diff1
value: 15.68698509866931
- type: nauc_recall_at_1_max
value: -38.49856837383727
- type: nauc_recall_at_20_diff1
value: 6.295179926244505
- type: nauc_recall_at_20_max
value: -39.20825139905824
- type: nauc_recall_at_3_diff1
value: 26.54887286634497
- type: nauc_recall_at_3_max
value: -39.46308906643022
- type: nauc_recall_at_5_diff1
value: 15.955004561636251
- type: nauc_recall_at_5_max
value: -36.491415148404585
- type: ndcg_at_1
value: 23.469
- type: ndcg_at_10
value: 16.07
- type: ndcg_at_100
value: 25.61
- type: ndcg_at_1000
value: 38.092999999999996
- type: ndcg_at_20
value: 17.980999999999998
- type: ndcg_at_3
value: 18.332
- type: ndcg_at_5
value: 17.302
- type: precision_at_1
value: 26.531
- type: precision_at_10
value: 14.285999999999998
- type: precision_at_100
value: 5.469
- type: precision_at_1000
value: 1.331
- type: precision_at_20
value: 12.449
- type: precision_at_3
value: 19.048000000000002
- type: precision_at_5
value: 17.551
- type: recall_at_1
value: 1.773
- type: recall_at_10
value: 10.698
- type: recall_at_100
value: 35.684
- type: recall_at_1000
value: 72.932
- type: recall_at_20
value: 18.723
- type: recall_at_3
value: 4.788
- type: recall_at_5
value: 6.715
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 63.38378906249999
- type: ap
value: 11.003129023708196
- type: ap_weighted
value: 11.003129023708196
- type: f1
value: 48.57435688911943
- type: f1_weighted
value: 71.50552650067205
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.467458970005666
- type: f1
value: 62.66021402025028
- type: f1_weighted
value: 61.71229183506532
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 44.96874447839597
- type: v_measures
value:
- 0.44089700334011683
- 0.4808871718296333
- 0.4655762216311635
- 0.4108889263207817
- 0.45449139982441744
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.83179352685224
- type: cos_sim_ap
value: 73.75870454252158
- type: cos_sim_f1
value: 67.61786600496278
- type: cos_sim_precision
value: 63.81733021077284
- type: cos_sim_recall
value: 71.89973614775725
- type: dot_accuracy
value: 85.83179352685224
- type: dot_ap
value: 73.75870536224079
- type: dot_f1
value: 67.61786600496278
- type: dot_precision
value: 63.81733021077284
- type: dot_recall
value: 71.89973614775725
- type: euclidean_accuracy
value: 85.83179352685224
- type: euclidean_ap
value: 73.7587242895193
- type: euclidean_f1
value: 67.61786600496278
- type: euclidean_precision
value: 63.81733021077284
- type: euclidean_recall
value: 71.89973614775725
- type: manhattan_accuracy
value: 85.91524110389224
- type: manhattan_ap
value: 73.77139111004601
- type: manhattan_f1
value: 67.52419453632244
- type: manhattan_precision
value: 63.023096272581746
- type: manhattan_recall
value: 72.71767810026385
- type: max_accuracy
value: 85.91524110389224
- type: max_ap
value: 73.77139111004601
- type: max_f1
value: 67.61786600496278
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.2711607870532
- type: cos_sim_ap
value: 86.48986874793765
- type: cos_sim_f1
value: 79.10280373831775
- type: cos_sim_precision
value: 76.87836070338614
- type: cos_sim_recall
value: 81.45980905451185
- type: dot_accuracy
value: 89.2711607870532
- type: dot_ap
value: 86.48986996838993
- type: dot_f1
value: 79.10280373831775
- type: dot_precision
value: 76.87836070338614
- type: dot_recall
value: 81.45980905451185
- type: euclidean_accuracy
value: 89.2711607870532
- type: euclidean_ap
value: 86.4898691356683
- type: euclidean_f1
value: 79.10280373831775
- type: euclidean_precision
value: 76.87836070338614
- type: euclidean_recall
value: 81.45980905451185
- type: manhattan_accuracy
value: 89.2711607870532
- type: manhattan_ap
value: 86.46475884590569
- type: manhattan_f1
value: 78.9534579927593
- type: manhattan_precision
value: 76.61716769286491
- type: manhattan_recall
value: 81.43671080997844
- type: max_accuracy
value: 89.2711607870532
- type: max_ap
value: 86.48986996838993
- type: max_f1
value: 79.10280373831775
---
# [bilingual-embedding-base](https://huggingface.co/Lajavaness/bilingual-embedding-base)
Bilingual-embedding is an embedding model for a bilingual pair of languages: French and English. It is a specialized sentence-embedding model trained specifically for this bilingual setting, leveraging the robust capabilities of [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-base), a pre-trained multilingual language model. The model uses XLM-RoBERTa to encode English-French sentences into a 1024-dimensional vector space, enabling a wide range of applications from semantic search to text clustering. The embeddings capture the nuanced meanings of English-French sentences, reflecting both the lexical and contextual layers of the language.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BilingualModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Training and Fine-tuning process
### Stage 1: NLI Training
- Dataset: (SNLI + XNLI) for English and French
- Method: Training with Multiple Negatives Ranking Loss. This stage focused on improving the model's ability to discern and rank nuanced differences in sentence semantics.
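For illustration, here is a minimal sketch of this stage with the `sentence-transformers` training API; the backbone checkpoint, example pairs, and hyperparameters are placeholders rather than the exact training recipe.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the multilingual backbone; a mean-pooling head is added automatically.
model = SentenceTransformer("FacebookAI/xlm-roberta-base")

# Premise/entailment pairs from SNLI (en) and XNLI (fr); other in-batch examples act as negatives.
train_examples = [
    InputExample(texts=["A soccer game with multiple males playing.", "Some men are playing a sport."]),
    InputExample(texts=["Un homme joue de la guitare.", "Une personne joue d'un instrument."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```
With this loss, every other example in the batch serves as a negative, which is what makes the ranking objective effective at larger batch sizes.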
### Stage 3: Continued Fine-tuning for Semantic Textual Similarity on STS Benchmark
- Dataset: [STSB-fr and en]
- Method: Fine-tuning specifically for the semantic textual similarity benchmark using Siamese BERT-Networks configured with the 'sentence-transformers' library.
### Stage 4: Advanced Augmentation Fine-tuning
- Dataset: STSB with generated [silver samples from gold samples](https://www.sbert.net/examples/training/data_augmentation/README.html)
- Method: Employed an advanced strategy using [Augmented SBERT](https://arxiv.org/abs/2010.08240) with Pair Sampling Strategies, integrating both Cross-Encoder and Bi-Encoder models. This stage further refined the embeddings by enriching the training data dynamically, enhancing the model's robustness and accuracy.
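A sketch of the silver-labeling step follows; the cross-encoder checkpoint and sentence pairs are illustrative assumptions, not the exact setup used for this model.
```python
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# Score unlabeled sentence pairs with a cross-encoder trained on the gold STS data,
# then reuse the predictions as silver labels for the bi-encoder.
cross_encoder = CrossEncoder("cross-encoder/stsb-roberta-base")  # illustrative choice

unlabeled_pairs = [
    ["The weather is nice today.", "Il fait beau aujourd'hui."],
    ["He bought a new car.", "Elle lit un livre."],
]
silver_scores = cross_encoder.predict(unlabeled_pairs)

silver_examples = [
    InputExample(texts=pair, label=float(score))
    for pair, score in zip(unlabeled_pairs, silver_scores)
]
# The silver examples are then mixed with the gold STS-B pairs and used to train the
# bi-encoder, following the Augmented SBERT recipe.
```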
## Usage:
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Paris est une capitale de la France", "Paris is a capital of France"]
model = SentenceTransformer('Lajavaness/bilingual-embedding-base', trust_remote_code=True)
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation
TODO
## Citation
@article{conneau2019unsupervised,
title={Unsupervised cross-lingual representation learning at scale},
author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1911.02116},
year={2019}
}
@article{reimers2019sentence,
title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
author={Reimers, Nils and Gurevych, Iryna},
journal={https://arxiv.org/abs/1908.10084},
year={2019}
}
@article{thakur2020augmented,
title={Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks},
author={Thakur, Nandan and Reimers, Nils and Daxenberger, Johannes and Gurevych, Iryna},
journal={arXiv e-prints},
pages={arXiv--2010},
year={2020}
}
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
RichardErkhov/microsoft_-_Phi-3-mini-128k-instruct-8bits | RichardErkhov | null | [
"safetensors",
"phi3",
"custom_code",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,728 | 1,728 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3-mini-128k-instruct - bnb 8bits
- Model creator: https://huggingface.co/microsoft/
- Original model: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
🎉 **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains on long-context understanding, instruction following, and structured output.
We also improved multi-turn conversation quality, added explicit support for the `<|system|>` tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight the new release's improvements on instruction following, structured output, reasoning, and long-context understanding on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: if users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For the model conversion, e.g. GGUF and other formats, we invite the community to experiment with various approaches and share your valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated in the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
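For example, a hedged sketch of extending the tokenizer for downstream fine-tuning; the token strings below are made up for illustration.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

# Hypothetical domain-specific tokens; the embedding matrix already has 32064 rows,
# so no resize is needed as long as the extended tokenizer stays within that size.
num_added = tokenizer.add_tokens(["<|retrieval|>", "<|tool_call|>"], special_tokens=True)
print(f"Added {num_added} tokens; tokenizer size is now {len(tokenizer)} (must stay <= 32064).")
```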
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
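The chat template bundled with the tokenizer can render this format programmatically; a small sketch, assuming the bundled template matches the layout above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# Produces the <|system|>/<|user|>/<|assistant|> prompt string shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```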
### Sample inference code
These code snippets show how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
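For example, a minimal sketch of selecting the attention implementation at load time:
```python
from transformers import AutoModelForCausalLM

# Flash attention requires A100/A6000/H100-class GPUs (see the Hardware section);
# on V100 or earlier GPUs, pass attn_implementation="eager" instead.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
```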
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: June, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a Premier League game on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in small models. More details about the data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
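As a rough orientation, a minimal single-process sketch with `trl` is shown below; the toy dataset and hyperparameters are placeholders, and argument names differ across `trl` versions (newer releases move `dataset_text_field` and `max_seq_length` into `SFTConfig`). Multi-GPU runs would typically wrap such a script with `accelerate launch`.
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "microsoft/Phi-3-mini-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy in-memory dataset with a plain "text" column, purely for illustration
train_dataset = Dataset.from_dict({
    "text": [
        "<|user|>\nWhat is the capital of France?<|end|>\n<|assistant|>\nParis.<|end|>",
        "<|user|>\nSolve 2x + 3 = 7.<|end|>\n<|assistant|>\nx = 2.<|end|>",
    ]
})

args = TrainingArguments(
    output_dir="phi3-mini-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()
```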
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length, therefore the model is capable of several long-context tasks, including long document/meeting summarization and long document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets at the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model with only 3.8B-param achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much world knowledge, which can be seen for example with low performance on TriviaQA. However, we believe such weakness can be resolved by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DML, ONNX Runtime provides cross-platform support for Phi-3 Mini across a range of devices (CPU, GPU, and mobile).
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"SUMMARIZATION"
] | [
"MEDQA"
] | Non_BioNLP |
FremyCompany/opus-mt-nl-en-healthcare | FremyCompany | translation | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"nl",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681 | 1,686 | 15 | 2 | ---
language:
- nl
- en
license: mit
pipeline_tag: translation
---
# Opus-MT-NL-EN-Healthcare
This model can be used to translate medical text from Dutch to English, as a baseline for Clinical NLP systems in Dutch.
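A minimal usage sketch with the `transformers` translation pipeline (not part of the original card; the example sentence is made up):
```python
from transformers import pipeline

# Load the fine-tuned Marian model for Dutch-to-English medical translation
translator = pipeline("translation", model="FremyCompany/opus-mt-nl-en-healthcare")

result = translator("De patiënt heeft last van hoge bloeddruk en diabetes type 2.")
print(result[0]["translation_text"])
```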
## Model Details
### Model Description
This model, based on MarianMT, was trained using OpusNLP data as well as specialized medical data (see the paper for more details).
- **Developed by:** François Remy
- **Model type:** Translation
- **Languages:** From Dutch to English
- **License:** MIT
- **Finetuned from model:** Helsinki-NLP/opus-mt-nl-en
### Model Sources
- **Repository:** [Snomed Translation Dictionaries](https://github.com/FremyCompany/snomed-translate-dictionaries)
- **Paper:** [Taming large lexicons: translating clinical text using medical ontologies and sentence templates](https://biblio.ugent.be/publication/01GMN7J3X37CKWCKQV6DWGM4ZE/file/01GMN7NT30DKF37RXN31DSJZ2D.pdf)
## Citation
If you use this model as part of your work, please cite the following paper:
**BibTeX:**
```
@inproceedings{remy-et-al-2022-translating,
author = {{Remy, François and De Jaeger, P. and Demuynck, Kris}},
title = {{Taming large lexicons: translating clinical text using medical ontologies and sentence templates}},
booktitle = {{EmP 2022}},
location = {{Roeselare, Belgium}},
year = {{2022}},
language = {{en}},
pages = {{5}},
}
```
**APA:**
```
Remy, F., De Jaeger, P., & Demuynck, K. (2022). Taming large lexicons: translating clinical text using medical ontologies and sentence templates. Presented at the EmP 2022, The “Engineers meet Practicians” conference, Roeselare, Belgium.
```
| [
"TRANSLATION"
] | [
"MEDICAL DATA"
] | BioNLP |
fblgit/UNAversal-8x7B-v1beta | fblgit | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"UNA",
"juanako",
"MoE",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,703 | 1,709 | 2,005 | 8 | ---
language:
- en
library_name: transformers
license: cc-by-nc-sa-4.0
tags:
- UNA
- juanako
- mixtral
- MoE
model-index:
- name: UNAversal-8x7B-v1beta
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.97
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
---
# UNAversal - Uniform Neural Alignment (MoE)
This is just a beta, a first release, so people can start working on frankensteins and so on.
It does achieve high GSM/Math and TQA scores, so ideally you can merge it with other Mixtrals and see what comes out of it.
Based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
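For completeness, a minimal inference sketch with `transformers`, assuming the tokenizer ships the standard Mixtral instruct chat template; the full model needs multiple large GPUs or quantization.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/UNAversal-8x7B-v1beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Explain what a Mixture-of-Experts router does."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```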
## UNA Details
For this model we went with the most obvious option: placing UNA on the router_logit. It does work, but we saw much better performance on SFT by doing so.
So this model DOES have a UNA-SFT phase; it is highly experimental and merely used LLaMA-Factory datasets, for example Alpaca.
As the others:
- Can be finetuned further; try 2e-5 or **1e-4 (since it's MoE)**
- Can be merged, here you will have to improvise and please report findings on a discussion thread.
**REMINDER**: please cite; it does help the research and the lab itself, seriously.
## NEED YOUR HELP!!
I need a multi-turn train loop for Mixtral that can squeeze the juice out of 8x H100s properly. Please feel free to reach @fblgit on either Discord or Twitter. Thanks!
# Evals
Here are some, but we also submitted it to the HF eval queue...
## GSM8k 5-Shot
```
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6603|± | 0.013|
```
## ARC 25-Shot
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.6621|± |0.0138|
| | |none | 25|acc_norm|0.6962|± |0.0134|
```
## TruthfulQA 0-Shot (MC2)
```
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7122|± |0.0141|
```
## 0-Shots Evals
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|----------|-----:|---|-----:|
|arc_challenge |Yaml |none | 0|acc |0.6101|± |0.0143|
| | |none | 0|acc_norm |0.6425|± |0.0140|
|arc_easy |Yaml |none | 0|acc |0.8615|± |0.0071|
| | |none | 0|acc_norm |0.8375|± |0.0076|
|boolq |Yaml |none | 0|acc |0.8624|± |0.0060|
|lambada_openai|Yaml |none | 0|perplexity|2.8318|± |0.0507|
| | |none | 0|acc |0.7650|± |0.0059|
|mathqa |Yaml |none | 0|acc |0.4472|± |0.0091|
| | |none | 0|acc_norm |0.4436|± |0.0091|
|piqa |Yaml |none | 0|acc |0.8292|± |0.0088|
| | |none | 0|acc_norm |0.8422|± |0.0085|
|pubmedqa |Yaml |none | 0|acc |0.7920|± |0.0182|
|sciq |Yaml |none | 0|acc |0.9630|± |0.0060|
| | |none | 0|acc_norm |0.9370|± |0.0077|
```
## BBH at 0-Shot
```
vllm (pretrained=fblgit/UNAversal-8x7B-v1beta,tensor_parallel_size=2,data_parallel_size=4,gpu_memory_utilization=0.8,dtype=float16), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: auto
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
| - bbh_cot_fewshot_boolean_expressions |Yaml |get-answer| 0|exact_match|0.8840|± |0.0203|
| - bbh_cot_fewshot_causal_judgement |Yaml |get-answer| 0|exact_match|0.6417|± |0.0352|
| - bbh_cot_fewshot_date_understanding |Yaml |get-answer| 0|exact_match|0.7600|± |0.0271|
| - bbh_cot_fewshot_disambiguation_qa |Yaml |get-answer| 0|exact_match|0.7160|± |0.0286|
| - bbh_cot_fewshot_dyck_languages |Yaml |get-answer| 0|exact_match|0.1800|± |0.0243|
| - bbh_cot_fewshot_formal_fallacies |Yaml |get-answer| 0|exact_match|0.6520|± |0.0302|
| - bbh_cot_fewshot_geometric_shapes |Yaml |get-answer| 0|exact_match|0.3880|± |0.0309|
| - bbh_cot_fewshot_hyperbaton |Yaml |get-answer| 0|exact_match|0.9600|± |0.0124|
| - bbh_cot_fewshot_logical_deduction_five_objects |Yaml |get-answer| 0|exact_match|0.5360|± |0.0316|
| - bbh_cot_fewshot_logical_deduction_seven_objects |Yaml |get-answer| 0|exact_match|0.5040|± |0.0317|
| - bbh_cot_fewshot_logical_deduction_three_objects |Yaml |get-answer| 0|exact_match|0.8600|± |0.0220|
| - bbh_cot_fewshot_movie_recommendation |Yaml |get-answer| 0|exact_match|0.7840|± |0.0261|
| - bbh_cot_fewshot_multistep_arithmetic_two |Yaml |get-answer| 0|exact_match|0.6600|± |0.0300|
| - bbh_cot_fewshot_navigate |Yaml |get-answer| 0|exact_match|0.8160|± |0.0246|
| - bbh_cot_fewshot_object_counting |Yaml |get-answer| 0|exact_match|0.8360|± |0.0235|
| - bbh_cot_fewshot_penguins_in_a_table |Yaml |get-answer| 0|exact_match|0.7329|± |0.0367|
| - bbh_cot_fewshot_reasoning_about_colored_objects |Yaml |get-answer| 0|exact_match|0.8120|± |0.0248|
| - bbh_cot_fewshot_ruin_names |Yaml |get-answer| 0|exact_match|0.4440|± |0.0315|
| - bbh_cot_fewshot_salient_translation_error_detection |Yaml |get-answer| 0|exact_match|0.5200|± |0.0317|
| - bbh_cot_fewshot_snarks |Yaml |get-answer| 0|exact_match|0.7135|± |0.0340|
| - bbh_cot_fewshot_sports_understanding |Yaml |get-answer| 0|exact_match|0.9400|± |0.0151|
| - bbh_cot_fewshot_temporal_sequences |Yaml |get-answer| 0|exact_match|0.7560|± |0.0272|
| - bbh_cot_fewshot_tracking_shuffled_objects_five_objects |Yaml |get-answer| 0|exact_match|0.5680|± |0.0314|
| - bbh_cot_fewshot_tracking_shuffled_objects_seven_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_tracking_shuffled_objects_three_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_web_of_lies |Yaml |get-answer| 0|exact_match|0.9560|± |0.0130|
| - bbh_cot_fewshot_word_sorting |Yaml |get-answer| 0|exact_match|0.3800|± |0.0308|
|Groups|Version| Filter |n-shot| Metric |Value | |Stderr|
|------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNAversal-8x7B-v1beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.78|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |86.90|
|MMLU (5-Shot) |70.39|
|TruthfulQA (0-shot) |71.97|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |61.64|
| [
"TRANSLATION"
] | [
"PUBMEDQA",
"SCIQ"
] | Non_BioNLP |
apple/DCLM-7B | apple | null | [
"transformers",
"safetensors",
"openlm",
"arxiv:2406.11794",
"license:apple-ascl",
"endpoints_compatible",
"region:us"
] | 1,720 | 1,721 | 759 | 835 | ---
license: apple-ascl
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for DCLM-Baseline-7B
DCLM-Baseline-7B is a 7 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
## Model Details
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| 7B | 2.5T | 32 | 4096 | 32 | 2048 |
### Model Description
- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apple Sample Code License
- **Contact:** [email protected]
- **Date:** June 2024
### Model Sources
- **Repository:** https://github.com/mlfoundations/dclm
- **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
## Using the Model
First install open_lm
```bash
pip install git+https://github.com/mlfoundations/open_lm.git
```
Then:
```python
from open_lm.hf import *
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-Baseline-7B")
model = AutoModelForCausalLM.from_pretrained("apple/DCLM-Baseline-7B")
inputs = tokenizer(["Machine learning is"], return_tensors="pt")
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs['input_ids'], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
```
### Training Details
The model was trained using the following setup:
- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with OpenLM
- **Optimizer:** AdamW
- **Learning Rate:** 2e-3 (peak)
- **Weight Decay:** 0.05
- **Batch Size:** 2048 sequences
- **Sequence Length:** 2048 tokens
- **Total Training Tokens:** 2.5T
- **Hardware:** Trained on H100 GPUs
For more detailed training information, please refer to Section 3.4 and Appendix F of the DCLM paper.
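As a rough sketch, the reported optimizer settings map onto standard PyTorch as follows; the schedule and the remaining hyperparameters (betas, warmup) are assumptions rather than the actual open_lm configuration.
```python
import torch
import torch.nn as nn

# Stand-in module for the real decoder-only Transformer
model = nn.Linear(4096, 4096)

# AdamW with the reported peak learning rate and weight decay
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3, weight_decay=0.05)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100_000)
```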
To ensure our trained model is broadly useful, including for math and coding tasks, we combine our 3.8T [DCLM-BASELINE](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) with the [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) and [ProofPile2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) data to arrive at a 4.1T token dataset.
## Evaluation
Here are the evaluation results for DCLM-Baseline-7B on various tasks (using [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite)
| Task | Score |
|------|-------|
| MMLU (zero-shot) | 0.5766 |
| MMLU (few-shot) | 0.6372 |
| HellaSwag (zero-shot) | 0.7987 |
| HellaSwag | 0.8043 |
| Jeopardy | 0.4745 |
| TriviaQA | 0.5270 |
| GSM8K (CoT) | 0.0250 |
| AGI Eval SAT Math (CoT) | 0.0136 |
| AQuA (CoT) | 0.0490 |
| SVAMP (CoT) | 0.4900 |
| BigBench QA Wikidata | 0.7120 |
| ARC Easy | 0.8220 |
| ARC Challenge | 0.5990 |
| BigBench Misconceptions | 0.6986 |
| COPA | 0.8500 |
| SIQA | 0.8291 |
| CommonsenseQA | 0.8018 |
| PIQA | 0.8128 |
| OpenBookQA | 0.4540 |
| BigBench Novel Concepts | 0.7188 |
| BigBench Strange Stories | 0.7586 |
| BigBench Strategy QA | 0.6173 |
| LAMBADA | 0.8220 |
| Winograd | 0.8828 |
| Winogrande | 0.7269 |
| BigBench Conlang Translation | 0.0244 |
| BigBench Language Identification | 0.5219 |
| BigBench Conceptual Combinations | 0.6990 |
| BigBench Elementary Math QA | 0.3431 |
| BigBench Dyck Languages | 0.4930 |
| AGI Eval LSAT AR | 0.2435 |
| BigBench CS Algorithms | 0.6121 |
| BigBench Logical Deduction | 0.3620 |
| BigBench Operators | 0.4857 |
| BigBench Repeat Copy Logic | 0.4063 |
| Simple Arithmetic (no spaces) | 0.2940 |
| Simple Arithmetic (with spaces) | 0.3110 |
| MathQA | 0.3098 |
| LogiQA | 0.4132 |
| PubMedQA | 0.7060 |
| SQuAD | 0.5856 |
| AGI Eval LSAT RC | 0.6716 |
| AGI Eval LSAT LR | 0.5392 |
| CoQA | 0.4074 |
| BigBench Understanding Fables | 0.6825 |
| BoolQ | 0.8343 |
| AGI Eval SAT EN | 0.7670 |
| Winogender MC (Female) | 0.6000 |
| Winogender MC (Male) | 0.5500 |
| Enterprise PII Classification | 0.7676 |
| BBQ | 0.6912 |
| GPQA Main | 0.2612 |
| GPQA Diamond | 0.2475 |
Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.
## Comparison
Below are comparisons of this model with other models in the 7B regime.
| Model | Params | Tokens | Open dataset? | CORE | MMLU | EXTENDED |
|---------------|--------|--------|---------------|----------|----------|----------|
| **Open weights, closed datasets** | | | | | | |
| Llama2 | 7B | 2T | ❌ | 49.2 | 45.8 | 34.1 |
| DeepSeek | 7B | 2T | ❌ | 50.7 | 48.5 | 35.3 |
| Mistral-0.3 | 7B | ? | ❌ | 57.0 | 62.7 | 45.1 |
| QWEN-2 | 7B | ? | ❌ | 57.5 | **71.9** | 50.5 |
| Llama3 | 8B | 15T | ❌ | 57.6 | 66.2 | 46.3 |
| Gemma | 8B | 6T | ❌ | 57.8 | 64.3 | 44.6 |
| Phi-3 | 7B | ? | ❌ | **61.0** | 69.9 | **57.9** |
| **Open weights, open datasets** | | | | | | |
| Falcon | 7B | 1T | ✅ | 44.1 | 27.4 | 25.1 |
| OLMo-1.7 | 7B | 2.1T | ✅ | 47.0 | 54.0 | 34.2 |
| MAP-Neo | 7B | 4.5T | ✅ | **50.2** | **57.1** | **40.4** |
| **DCLM-7B** | 7B | 2.5T | ✅ | **56.1** | **63.7** | **43.6** |
## Limitations and Biases
While DCLM-Baseline-7B demonstrates strong performance across a range of tasks, it's important to note:
1. The model may exhibit biases present in its training data, which is derived from web crawl data.
2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
3. Performance on tasks not included in the evaluation suite may vary.
4. The model's knowledge is limited to its training data cutoff date.
## Ethical Considerations
Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.
## Citation
If you use this model in your research, please cite:
```
@article{Li2024DataCompLM,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
journal={arXiv preprint arXiv:2406.11794},
year={2024}
}
```
| [
"TRANSLATION"
] | [
"PUBMEDQA"
] | Non_BioNLP |
Plasmoxy/bge-micro-v2-Q4_K_M-GGUF | Plasmoxy | sentence-similarity | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"llama-cpp",
"gguf-my-repo",
"base_model:TaylorAI/bge-micro-v2",
"base_model:quantized:TaylorAI/bge-micro-v2",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,739 | 1,739 | 26 | 1 | ---
base_model: TaylorAI/bge-micro-v2
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
- llama-cpp
- gguf-my-repo
model-index:
- name: bge_micro
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 67.76119402985074
- type: ap
value: 29.637849284211114
- type: f1
value: 61.31181187111905
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 79.7547
- type: ap
value: 74.21401629809145
- type: f1
value: 79.65319615433783
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.452000000000005
- type: f1
value: 37.0245198854966
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.152
- type: map_at_10
value: 46.702
- type: map_at_100
value: 47.563
- type: map_at_1000
value: 47.567
- type: map_at_3
value: 42.058
- type: map_at_5
value: 44.608
- type: mrr_at_1
value: 32.006
- type: mrr_at_10
value: 47.064
- type: mrr_at_100
value: 47.910000000000004
- type: mrr_at_1000
value: 47.915
- type: mrr_at_3
value: 42.283
- type: mrr_at_5
value: 44.968
- type: ndcg_at_1
value: 31.152
- type: ndcg_at_10
value: 55.308
- type: ndcg_at_100
value: 58.965
- type: ndcg_at_1000
value: 59.067
- type: ndcg_at_3
value: 45.698
- type: ndcg_at_5
value: 50.296
- type: precision_at_1
value: 31.152
- type: precision_at_10
value: 8.279
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.753
- type: precision_at_5
value: 13.485
- type: recall_at_1
value: 31.152
- type: recall_at_10
value: 82.788
- type: recall_at_100
value: 98.72
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 56.259
- type: recall_at_5
value: 67.425
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.52692241938116
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 33.245710292773595
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.08493637155168
- type: mrr
value: 71.94378490084861
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.1602804378326
- type: cos_sim_spearman
value: 82.92478106365587
- type: euclidean_pearson
value: 82.27930167277077
- type: euclidean_spearman
value: 82.18560759458093
- type: manhattan_pearson
value: 82.34277425888187
- type: manhattan_spearman
value: 81.72776583704467
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 81.17207792207792
- type: f1
value: 81.09893836310513
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 36.109308463095516
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.06048212317168
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 28.233999999999998
- type: map_at_10
value: 38.092999999999996
- type: map_at_100
value: 39.473
- type: map_at_1000
value: 39.614
- type: map_at_3
value: 34.839
- type: map_at_5
value: 36.523
- type: mrr_at_1
value: 35.193000000000005
- type: mrr_at_10
value: 44.089
- type: mrr_at_100
value: 44.927
- type: mrr_at_1000
value: 44.988
- type: mrr_at_3
value: 41.559000000000005
- type: mrr_at_5
value: 43.162
- type: ndcg_at_1
value: 35.193000000000005
- type: ndcg_at_10
value: 44.04
- type: ndcg_at_100
value: 49.262
- type: ndcg_at_1000
value: 51.847
- type: ndcg_at_3
value: 39.248
- type: ndcg_at_5
value: 41.298
- type: precision_at_1
value: 35.193000000000005
- type: precision_at_10
value: 8.555
- type: precision_at_100
value: 1.3820000000000001
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 19.123
- type: precision_at_5
value: 13.648
- type: recall_at_1
value: 28.233999999999998
- type: recall_at_10
value: 55.094
- type: recall_at_100
value: 76.85300000000001
- type: recall_at_1000
value: 94.163
- type: recall_at_3
value: 40.782000000000004
- type: recall_at_5
value: 46.796
- type: map_at_1
value: 21.538
- type: map_at_10
value: 28.449
- type: map_at_100
value: 29.471000000000004
- type: map_at_1000
value: 29.599999999999998
- type: map_at_3
value: 26.371
- type: map_at_5
value: 27.58
- type: mrr_at_1
value: 26.815
- type: mrr_at_10
value: 33.331
- type: mrr_at_100
value: 34.114
- type: mrr_at_1000
value: 34.182
- type: mrr_at_3
value: 31.561
- type: mrr_at_5
value: 32.608
- type: ndcg_at_1
value: 26.815
- type: ndcg_at_10
value: 32.67
- type: ndcg_at_100
value: 37.039
- type: ndcg_at_1000
value: 39.769
- type: ndcg_at_3
value: 29.523
- type: ndcg_at_5
value: 31.048
- type: precision_at_1
value: 26.815
- type: precision_at_10
value: 5.955
- type: precision_at_100
value: 1.02
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 14.033999999999999
- type: precision_at_5
value: 9.911
- type: recall_at_1
value: 21.538
- type: recall_at_10
value: 40.186
- type: recall_at_100
value: 58.948
- type: recall_at_1000
value: 77.158
- type: recall_at_3
value: 30.951
- type: recall_at_5
value: 35.276
- type: map_at_1
value: 35.211999999999996
- type: map_at_10
value: 46.562
- type: map_at_100
value: 47.579
- type: map_at_1000
value: 47.646
- type: map_at_3
value: 43.485
- type: map_at_5
value: 45.206
- type: mrr_at_1
value: 40.627
- type: mrr_at_10
value: 49.928
- type: mrr_at_100
value: 50.647
- type: mrr_at_1000
value: 50.685
- type: mrr_at_3
value: 47.513
- type: mrr_at_5
value: 48.958
- type: ndcg_at_1
value: 40.627
- type: ndcg_at_10
value: 52.217
- type: ndcg_at_100
value: 56.423
- type: ndcg_at_1000
value: 57.821999999999996
- type: ndcg_at_3
value: 46.949000000000005
- type: ndcg_at_5
value: 49.534
- type: precision_at_1
value: 40.627
- type: precision_at_10
value: 8.476
- type: precision_at_100
value: 1.15
- type: precision_at_1000
value: 0.132
- type: precision_at_3
value: 21.003
- type: precision_at_5
value: 14.469999999999999
- type: recall_at_1
value: 35.211999999999996
- type: recall_at_10
value: 65.692
- type: recall_at_100
value: 84.011
- type: recall_at_1000
value: 94.03099999999999
- type: recall_at_3
value: 51.404
- type: recall_at_5
value: 57.882
- type: map_at_1
value: 22.09
- type: map_at_10
value: 29.516
- type: map_at_100
value: 30.462
- type: map_at_1000
value: 30.56
- type: map_at_3
value: 26.945000000000004
- type: map_at_5
value: 28.421999999999997
- type: mrr_at_1
value: 23.616
- type: mrr_at_10
value: 31.221
- type: mrr_at_100
value: 32.057
- type: mrr_at_1000
value: 32.137
- type: mrr_at_3
value: 28.738000000000003
- type: mrr_at_5
value: 30.156
- type: ndcg_at_1
value: 23.616
- type: ndcg_at_10
value: 33.97
- type: ndcg_at_100
value: 38.806000000000004
- type: ndcg_at_1000
value: 41.393
- type: ndcg_at_3
value: 28.908
- type: ndcg_at_5
value: 31.433
- type: precision_at_1
value: 23.616
- type: precision_at_10
value: 5.299
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.107
- type: precision_at_3
value: 12.015
- type: precision_at_5
value: 8.701
- type: recall_at_1
value: 22.09
- type: recall_at_10
value: 46.089999999999996
- type: recall_at_100
value: 68.729
- type: recall_at_1000
value: 88.435
- type: recall_at_3
value: 32.584999999999994
- type: recall_at_5
value: 38.550000000000004
- type: map_at_1
value: 15.469
- type: map_at_10
value: 22.436
- type: map_at_100
value: 23.465
- type: map_at_1000
value: 23.608999999999998
- type: map_at_3
value: 19.716
- type: map_at_5
value: 21.182000000000002
- type: mrr_at_1
value: 18.905
- type: mrr_at_10
value: 26.55
- type: mrr_at_100
value: 27.46
- type: mrr_at_1000
value: 27.553
- type: mrr_at_3
value: 23.921999999999997
- type: mrr_at_5
value: 25.302999999999997
- type: ndcg_at_1
value: 18.905
- type: ndcg_at_10
value: 27.437
- type: ndcg_at_100
value: 32.555
- type: ndcg_at_1000
value: 35.885
- type: ndcg_at_3
value: 22.439
- type: ndcg_at_5
value: 24.666
- type: precision_at_1
value: 18.905
- type: precision_at_10
value: 5.2490000000000006
- type: precision_at_100
value: 0.889
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 10.862
- type: precision_at_5
value: 8.085
- type: recall_at_1
value: 15.469
- type: recall_at_10
value: 38.706
- type: recall_at_100
value: 61.242
- type: recall_at_1000
value: 84.84
- type: recall_at_3
value: 24.973
- type: recall_at_5
value: 30.603
- type: map_at_1
value: 24.918000000000003
- type: map_at_10
value: 34.296
- type: map_at_100
value: 35.632000000000005
- type: map_at_1000
value: 35.748999999999995
- type: map_at_3
value: 31.304
- type: map_at_5
value: 33.166000000000004
- type: mrr_at_1
value: 30.703000000000003
- type: mrr_at_10
value: 39.655
- type: mrr_at_100
value: 40.569
- type: mrr_at_1000
value: 40.621
- type: mrr_at_3
value: 37.023
- type: mrr_at_5
value: 38.664
- type: ndcg_at_1
value: 30.703000000000003
- type: ndcg_at_10
value: 39.897
- type: ndcg_at_100
value: 45.777
- type: ndcg_at_1000
value: 48.082
- type: ndcg_at_3
value: 35.122
- type: ndcg_at_5
value: 37.691
- type: precision_at_1
value: 30.703000000000003
- type: precision_at_10
value: 7.305000000000001
- type: precision_at_100
value: 1.208
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 16.811
- type: precision_at_5
value: 12.203999999999999
- type: recall_at_1
value: 24.918000000000003
- type: recall_at_10
value: 51.31
- type: recall_at_100
value: 76.534
- type: recall_at_1000
value: 91.911
- type: recall_at_3
value: 37.855
- type: recall_at_5
value: 44.493
- type: map_at_1
value: 22.416
- type: map_at_10
value: 30.474
- type: map_at_100
value: 31.759999999999998
- type: map_at_1000
value: 31.891000000000002
- type: map_at_3
value: 27.728
- type: map_at_5
value: 29.247
- type: mrr_at_1
value: 28.881
- type: mrr_at_10
value: 36.418
- type: mrr_at_100
value: 37.347
- type: mrr_at_1000
value: 37.415
- type: mrr_at_3
value: 33.942
- type: mrr_at_5
value: 35.386
- type: ndcg_at_1
value: 28.881
- type: ndcg_at_10
value: 35.812
- type: ndcg_at_100
value: 41.574
- type: ndcg_at_1000
value: 44.289
- type: ndcg_at_3
value: 31.239
- type: ndcg_at_5
value: 33.302
- type: precision_at_1
value: 28.881
- type: precision_at_10
value: 6.598
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 14.954
- type: precision_at_5
value: 10.776
- type: recall_at_1
value: 22.416
- type: recall_at_10
value: 46.243
- type: recall_at_100
value: 71.352
- type: recall_at_1000
value: 90.034
- type: recall_at_3
value: 32.873000000000005
- type: recall_at_5
value: 38.632
- type: map_at_1
value: 22.528166666666667
- type: map_at_10
value: 30.317833333333333
- type: map_at_100
value: 31.44108333333333
- type: map_at_1000
value: 31.566666666666666
- type: map_at_3
value: 27.84425
- type: map_at_5
value: 29.233333333333334
- type: mrr_at_1
value: 26.75733333333333
- type: mrr_at_10
value: 34.24425
- type: mrr_at_100
value: 35.11375
- type: mrr_at_1000
value: 35.184333333333335
- type: mrr_at_3
value: 32.01225
- type: mrr_at_5
value: 33.31225
- type: ndcg_at_1
value: 26.75733333333333
- type: ndcg_at_10
value: 35.072583333333334
- type: ndcg_at_100
value: 40.13358333333334
- type: ndcg_at_1000
value: 42.81825
- type: ndcg_at_3
value: 30.79275000000001
- type: ndcg_at_5
value: 32.822
- type: precision_at_1
value: 26.75733333333333
- type: precision_at_10
value: 6.128083333333334
- type: precision_at_100
value: 1.019
- type: precision_at_1000
value: 0.14391666666666664
- type: precision_at_3
value: 14.129916666666665
- type: precision_at_5
value: 10.087416666666668
- type: recall_at_1
value: 22.528166666666667
- type: recall_at_10
value: 45.38341666666667
- type: recall_at_100
value: 67.81791666666668
- type: recall_at_1000
value: 86.71716666666666
- type: recall_at_3
value: 33.38741666666667
- type: recall_at_5
value: 38.62041666666667
- type: map_at_1
value: 21.975
- type: map_at_10
value: 28.144999999999996
- type: map_at_100
value: 28.994999999999997
- type: map_at_1000
value: 29.086000000000002
- type: map_at_3
value: 25.968999999999998
- type: map_at_5
value: 27.321
- type: mrr_at_1
value: 25
- type: mrr_at_10
value: 30.822
- type: mrr_at_100
value: 31.647
- type: mrr_at_1000
value: 31.712
- type: mrr_at_3
value: 28.860000000000003
- type: mrr_at_5
value: 30.041
- type: ndcg_at_1
value: 25
- type: ndcg_at_10
value: 31.929999999999996
- type: ndcg_at_100
value: 36.258
- type: ndcg_at_1000
value: 38.682
- type: ndcg_at_3
value: 27.972
- type: ndcg_at_5
value: 30.089
- type: precision_at_1
value: 25
- type: precision_at_10
value: 4.923
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 11.860999999999999
- type: precision_at_5
value: 8.466
- type: recall_at_1
value: 21.975
- type: recall_at_10
value: 41.102
- type: recall_at_100
value: 60.866
- type: recall_at_1000
value: 78.781
- type: recall_at_3
value: 30.268
- type: recall_at_5
value: 35.552
- type: map_at_1
value: 15.845999999999998
- type: map_at_10
value: 21.861
- type: map_at_100
value: 22.798
- type: map_at_1000
value: 22.925
- type: map_at_3
value: 19.922
- type: map_at_5
value: 21.054000000000002
- type: mrr_at_1
value: 19.098000000000003
- type: mrr_at_10
value: 25.397
- type: mrr_at_100
value: 26.246000000000002
- type: mrr_at_1000
value: 26.33
- type: mrr_at_3
value: 23.469
- type: mrr_at_5
value: 24.646
- type: ndcg_at_1
value: 19.098000000000003
- type: ndcg_at_10
value: 25.807999999999996
- type: ndcg_at_100
value: 30.445
- type: ndcg_at_1000
value: 33.666000000000004
- type: ndcg_at_3
value: 22.292
- type: ndcg_at_5
value: 24.075
- type: precision_at_1
value: 19.098000000000003
- type: precision_at_10
value: 4.58
- type: precision_at_100
value: 0.8099999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 10.346
- type: precision_at_5
value: 7.542999999999999
- type: recall_at_1
value: 15.845999999999998
- type: recall_at_10
value: 34.172999999999995
- type: recall_at_100
value: 55.24099999999999
- type: recall_at_1000
value: 78.644
- type: recall_at_3
value: 24.401
- type: recall_at_5
value: 28.938000000000002
- type: map_at_1
value: 22.974
- type: map_at_10
value: 30.108
- type: map_at_100
value: 31.208000000000002
- type: map_at_1000
value: 31.330999999999996
- type: map_at_3
value: 27.889999999999997
- type: map_at_5
value: 29.023
- type: mrr_at_1
value: 26.493
- type: mrr_at_10
value: 33.726
- type: mrr_at_100
value: 34.622
- type: mrr_at_1000
value: 34.703
- type: mrr_at_3
value: 31.575999999999997
- type: mrr_at_5
value: 32.690999999999995
- type: ndcg_at_1
value: 26.493
- type: ndcg_at_10
value: 34.664
- type: ndcg_at_100
value: 39.725
- type: ndcg_at_1000
value: 42.648
- type: ndcg_at_3
value: 30.447999999999997
- type: ndcg_at_5
value: 32.145
- type: precision_at_1
value: 26.493
- type: precision_at_10
value: 5.7090000000000005
- type: precision_at_100
value: 0.9199999999999999
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 13.464
- type: precision_at_5
value: 9.384
- type: recall_at_1
value: 22.974
- type: recall_at_10
value: 45.097
- type: recall_at_100
value: 66.908
- type: recall_at_1000
value: 87.495
- type: recall_at_3
value: 33.338
- type: recall_at_5
value: 37.499
- type: map_at_1
value: 22.408
- type: map_at_10
value: 29.580000000000002
- type: map_at_100
value: 31.145
- type: map_at_1000
value: 31.369000000000003
- type: map_at_3
value: 27.634999999999998
- type: map_at_5
value: 28.766000000000002
- type: mrr_at_1
value: 27.272999999999996
- type: mrr_at_10
value: 33.93
- type: mrr_at_100
value: 34.963
- type: mrr_at_1000
value: 35.031
- type: mrr_at_3
value: 32.016
- type: mrr_at_5
value: 33.221000000000004
- type: ndcg_at_1
value: 27.272999999999996
- type: ndcg_at_10
value: 33.993
- type: ndcg_at_100
value: 40.333999999999996
- type: ndcg_at_1000
value: 43.361
- type: ndcg_at_3
value: 30.918
- type: ndcg_at_5
value: 32.552
- type: precision_at_1
value: 27.272999999999996
- type: precision_at_10
value: 6.285
- type: precision_at_100
value: 1.389
- type: precision_at_1000
value: 0.232
- type: precision_at_3
value: 14.427000000000001
- type: precision_at_5
value: 10.356
- type: recall_at_1
value: 22.408
- type: recall_at_10
value: 41.318
- type: recall_at_100
value: 70.539
- type: recall_at_1000
value: 90.197
- type: recall_at_3
value: 32.513
- type: recall_at_5
value: 37
- type: map_at_1
value: 17.258000000000003
- type: map_at_10
value: 24.294
- type: map_at_100
value: 25.305
- type: map_at_1000
value: 25.419999999999998
- type: map_at_3
value: 22.326999999999998
- type: map_at_5
value: 23.31
- type: mrr_at_1
value: 18.484
- type: mrr_at_10
value: 25.863999999999997
- type: mrr_at_100
value: 26.766000000000002
- type: mrr_at_1000
value: 26.855
- type: mrr_at_3
value: 23.968
- type: mrr_at_5
value: 24.911
- type: ndcg_at_1
value: 18.484
- type: ndcg_at_10
value: 28.433000000000003
- type: ndcg_at_100
value: 33.405
- type: ndcg_at_1000
value: 36.375
- type: ndcg_at_3
value: 24.455
- type: ndcg_at_5
value: 26.031
- type: precision_at_1
value: 18.484
- type: precision_at_10
value: 4.603
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 10.659
- type: precision_at_5
value: 7.505000000000001
- type: recall_at_1
value: 17.258000000000003
- type: recall_at_10
value: 39.589999999999996
- type: recall_at_100
value: 62.592000000000006
- type: recall_at_1000
value: 84.917
- type: recall_at_3
value: 28.706
- type: recall_at_5
value: 32.224000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.578999999999999
- type: map_at_10
value: 17.642
- type: map_at_100
value: 19.451
- type: map_at_1000
value: 19.647000000000002
- type: map_at_3
value: 14.618
- type: map_at_5
value: 16.145
- type: mrr_at_1
value: 23.322000000000003
- type: mrr_at_10
value: 34.204
- type: mrr_at_100
value: 35.185
- type: mrr_at_1000
value: 35.235
- type: mrr_at_3
value: 30.847
- type: mrr_at_5
value: 32.824
- type: ndcg_at_1
value: 23.322000000000003
- type: ndcg_at_10
value: 25.352999999999998
- type: ndcg_at_100
value: 32.574
- type: ndcg_at_1000
value: 36.073
- type: ndcg_at_3
value: 20.318
- type: ndcg_at_5
value: 22.111
- type: precision_at_1
value: 23.322000000000003
- type: precision_at_10
value: 8.02
- type: precision_at_100
value: 1.5730000000000002
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 15.049000000000001
- type: precision_at_5
value: 11.87
- type: recall_at_1
value: 10.578999999999999
- type: recall_at_10
value: 30.964999999999996
- type: recall_at_100
value: 55.986000000000004
- type: recall_at_1000
value: 75.565
- type: recall_at_3
value: 18.686
- type: recall_at_5
value: 23.629
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.327
- type: map_at_10
value: 14.904
- type: map_at_100
value: 20.29
- type: map_at_1000
value: 21.42
- type: map_at_3
value: 10.911
- type: map_at_5
value: 12.791
- type: mrr_at_1
value: 57.25
- type: mrr_at_10
value: 66.62700000000001
- type: mrr_at_100
value: 67.035
- type: mrr_at_1000
value: 67.052
- type: mrr_at_3
value: 64.833
- type: mrr_at_5
value: 65.908
- type: ndcg_at_1
value: 43.75
- type: ndcg_at_10
value: 32.246
- type: ndcg_at_100
value: 35.774
- type: ndcg_at_1000
value: 42.872
- type: ndcg_at_3
value: 36.64
- type: ndcg_at_5
value: 34.487
- type: precision_at_1
value: 57.25
- type: precision_at_10
value: 25.924999999999997
- type: precision_at_100
value: 7.670000000000001
- type: precision_at_1000
value: 1.599
- type: precision_at_3
value: 41.167
- type: precision_at_5
value: 34.65
- type: recall_at_1
value: 7.327
- type: recall_at_10
value: 19.625
- type: recall_at_100
value: 41.601
- type: recall_at_1000
value: 65.117
- type: recall_at_3
value: 12.308
- type: recall_at_5
value: 15.437999999999999
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.53
- type: f1
value: 39.39884255816736
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.913000000000004
- type: map_at_10
value: 69.592
- type: map_at_100
value: 69.95599999999999
- type: map_at_1000
value: 69.973
- type: map_at_3
value: 67.716
- type: map_at_5
value: 68.899
- type: mrr_at_1
value: 63.561
- type: mrr_at_10
value: 74.2
- type: mrr_at_100
value: 74.468
- type: mrr_at_1000
value: 74.47500000000001
- type: mrr_at_3
value: 72.442
- type: mrr_at_5
value: 73.58
- type: ndcg_at_1
value: 63.561
- type: ndcg_at_10
value: 74.988
- type: ndcg_at_100
value: 76.52799999999999
- type: ndcg_at_1000
value: 76.88000000000001
- type: ndcg_at_3
value: 71.455
- type: ndcg_at_5
value: 73.42699999999999
- type: precision_at_1
value: 63.561
- type: precision_at_10
value: 9.547
- type: precision_at_100
value: 1.044
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 28.143
- type: precision_at_5
value: 18.008
- type: recall_at_1
value: 58.913000000000004
- type: recall_at_10
value: 87.18
- type: recall_at_100
value: 93.852
- type: recall_at_1000
value: 96.256
- type: recall_at_3
value: 77.55199999999999
- type: recall_at_5
value: 82.42399999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 11.761000000000001
- type: map_at_10
value: 19.564999999999998
- type: map_at_100
value: 21.099
- type: map_at_1000
value: 21.288999999999998
- type: map_at_3
value: 16.683999999999997
- type: map_at_5
value: 18.307000000000002
- type: mrr_at_1
value: 23.302
- type: mrr_at_10
value: 30.979
- type: mrr_at_100
value: 32.121
- type: mrr_at_1000
value: 32.186
- type: mrr_at_3
value: 28.549000000000003
- type: mrr_at_5
value: 30.038999999999998
- type: ndcg_at_1
value: 23.302
- type: ndcg_at_10
value: 25.592
- type: ndcg_at_100
value: 32.416
- type: ndcg_at_1000
value: 36.277
- type: ndcg_at_3
value: 22.151
- type: ndcg_at_5
value: 23.483999999999998
- type: precision_at_1
value: 23.302
- type: precision_at_10
value: 7.377000000000001
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.212
- type: precision_at_3
value: 14.712
- type: precision_at_5
value: 11.358
- type: recall_at_1
value: 11.761000000000001
- type: recall_at_10
value: 31.696
- type: recall_at_100
value: 58.01500000000001
- type: recall_at_1000
value: 81.572
- type: recall_at_3
value: 20.742
- type: recall_at_5
value: 25.707
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.275
- type: map_at_10
value: 44.712
- type: map_at_100
value: 45.621
- type: map_at_1000
value: 45.698
- type: map_at_3
value: 42.016999999999996
- type: map_at_5
value: 43.659
- type: mrr_at_1
value: 64.551
- type: mrr_at_10
value: 71.58099999999999
- type: mrr_at_100
value: 71.952
- type: mrr_at_1000
value: 71.96900000000001
- type: mrr_at_3
value: 70.236
- type: mrr_at_5
value: 71.051
- type: ndcg_at_1
value: 64.551
- type: ndcg_at_10
value: 53.913999999999994
- type: ndcg_at_100
value: 57.421
- type: ndcg_at_1000
value: 59.06
- type: ndcg_at_3
value: 49.716
- type: ndcg_at_5
value: 51.971999999999994
- type: precision_at_1
value: 64.551
- type: precision_at_10
value: 11.110000000000001
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 30.822
- type: precision_at_5
value: 20.273
- type: recall_at_1
value: 32.275
- type: recall_at_10
value: 55.55
- type: recall_at_100
value: 69.38600000000001
- type: recall_at_1000
value: 80.35799999999999
- type: recall_at_3
value: 46.232
- type: recall_at_5
value: 50.682
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 76.4604
- type: ap
value: 70.40498168422701
- type: f1
value: 76.38572688476046
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 15.065999999999999
- type: map_at_10
value: 25.058000000000003
- type: map_at_100
value: 26.268
- type: map_at_1000
value: 26.344
- type: map_at_3
value: 21.626
- type: map_at_5
value: 23.513
- type: mrr_at_1
value: 15.501000000000001
- type: mrr_at_10
value: 25.548
- type: mrr_at_100
value: 26.723000000000003
- type: mrr_at_1000
value: 26.793
- type: mrr_at_3
value: 22.142
- type: mrr_at_5
value: 24.024
- type: ndcg_at_1
value: 15.501000000000001
- type: ndcg_at_10
value: 31.008000000000003
- type: ndcg_at_100
value: 37.08
- type: ndcg_at_1000
value: 39.102
- type: ndcg_at_3
value: 23.921999999999997
- type: ndcg_at_5
value: 27.307
- type: precision_at_1
value: 15.501000000000001
- type: precision_at_10
value: 5.155
- type: precision_at_100
value: 0.822
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.363
- type: precision_at_5
value: 7.917000000000001
- type: recall_at_1
value: 15.065999999999999
- type: recall_at_10
value: 49.507
- type: recall_at_100
value: 78.118
- type: recall_at_1000
value: 93.881
- type: recall_at_3
value: 30.075000000000003
- type: recall_at_5
value: 38.222
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.6703146374829
- type: f1
value: 90.1258004293966
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 68.29229366165072
- type: f1
value: 50.016194478997875
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.57767316745124
- type: f1
value: 67.16194062146954
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.92064559515804
- type: f1
value: 73.6680729569968
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.56335607367883
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.131807833734268
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.07390328719844
- type: mrr
value: 32.117370992867905
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.274
- type: map_at_10
value: 11.489
- type: map_at_100
value: 14.518
- type: map_at_1000
value: 15.914
- type: map_at_3
value: 8.399
- type: map_at_5
value: 9.889000000000001
- type: mrr_at_1
value: 42.724000000000004
- type: mrr_at_10
value: 51.486
- type: mrr_at_100
value: 51.941
- type: mrr_at_1000
value: 51.99
- type: mrr_at_3
value: 49.278
- type: mrr_at_5
value: 50.485
- type: ndcg_at_1
value: 39.938
- type: ndcg_at_10
value: 31.862000000000002
- type: ndcg_at_100
value: 29.235
- type: ndcg_at_1000
value: 37.802
- type: ndcg_at_3
value: 35.754999999999995
- type: ndcg_at_5
value: 34.447
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 23.901
- type: precision_at_100
value: 7.715
- type: precision_at_1000
value: 2.045
- type: precision_at_3
value: 33.437
- type: precision_at_5
value: 29.782999999999998
- type: recall_at_1
value: 5.274
- type: recall_at_10
value: 15.351
- type: recall_at_100
value: 29.791
- type: recall_at_1000
value: 60.722
- type: recall_at_3
value: 9.411
- type: recall_at_5
value: 12.171999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.099
- type: map_at_10
value: 27.913
- type: map_at_100
value: 29.281000000000002
- type: map_at_1000
value: 29.343999999999998
- type: map_at_3
value: 23.791
- type: map_at_5
value: 26.049
- type: mrr_at_1
value: 18.337
- type: mrr_at_10
value: 29.953999999999997
- type: mrr_at_100
value: 31.080999999999996
- type: mrr_at_1000
value: 31.130000000000003
- type: mrr_at_3
value: 26.168000000000003
- type: mrr_at_5
value: 28.277
- type: ndcg_at_1
value: 18.308
- type: ndcg_at_10
value: 34.938
- type: ndcg_at_100
value: 41.125
- type: ndcg_at_1000
value: 42.708
- type: ndcg_at_3
value: 26.805
- type: ndcg_at_5
value: 30.686999999999998
- type: precision_at_1
value: 18.308
- type: precision_at_10
value: 6.476999999999999
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.784999999999998
- type: precision_at_5
value: 9.878
- type: recall_at_1
value: 16.099
- type: recall_at_10
value: 54.63
- type: recall_at_100
value: 82.24900000000001
- type: recall_at_1000
value: 94.242
- type: recall_at_3
value: 33.174
- type: recall_at_5
value: 42.164
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.947
- type: map_at_10
value: 81.499
- type: map_at_100
value: 82.17
- type: map_at_1000
value: 82.194
- type: map_at_3
value: 78.567
- type: map_at_5
value: 80.34400000000001
- type: mrr_at_1
value: 78.18
- type: mrr_at_10
value: 85.05
- type: mrr_at_100
value: 85.179
- type: mrr_at_1000
value: 85.181
- type: mrr_at_3
value: 83.91
- type: mrr_at_5
value: 84.638
- type: ndcg_at_1
value: 78.2
- type: ndcg_at_10
value: 85.715
- type: ndcg_at_100
value: 87.2
- type: ndcg_at_1000
value: 87.39
- type: ndcg_at_3
value: 82.572
- type: ndcg_at_5
value: 84.176
- type: precision_at_1
value: 78.2
- type: precision_at_10
value: 12.973
- type: precision_at_100
value: 1.5010000000000001
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.949999999999996
- type: precision_at_5
value: 23.62
- type: recall_at_1
value: 67.947
- type: recall_at_10
value: 93.804
- type: recall_at_100
value: 98.971
- type: recall_at_1000
value: 99.91600000000001
- type: recall_at_3
value: 84.75399999999999
- type: recall_at_5
value: 89.32
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 45.457201684255104
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.162226937477875
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.173
- type: map_at_10
value: 10.463000000000001
- type: map_at_100
value: 12.278
- type: map_at_1000
value: 12.572
- type: map_at_3
value: 7.528
- type: map_at_5
value: 8.863
- type: mrr_at_1
value: 20.599999999999998
- type: mrr_at_10
value: 30.422
- type: mrr_at_100
value: 31.6
- type: mrr_at_1000
value: 31.663000000000004
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.065
- type: ndcg_at_1
value: 20.599999999999998
- type: ndcg_at_10
value: 17.687
- type: ndcg_at_100
value: 25.172
- type: ndcg_at_1000
value: 30.617
- type: ndcg_at_3
value: 16.81
- type: ndcg_at_5
value: 14.499
- type: precision_at_1
value: 20.599999999999998
- type: precision_at_10
value: 9.17
- type: precision_at_100
value: 2.004
- type: precision_at_1000
value: 0.332
- type: precision_at_3
value: 15.6
- type: precision_at_5
value: 12.58
- type: recall_at_1
value: 4.173
- type: recall_at_10
value: 18.575
- type: recall_at_100
value: 40.692
- type: recall_at_1000
value: 67.467
- type: recall_at_3
value: 9.488000000000001
- type: recall_at_5
value: 12.738
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 81.12603499315416
- type: cos_sim_spearman
value: 73.62060290948378
- type: euclidean_pearson
value: 78.14083565781135
- type: euclidean_spearman
value: 73.16840437541543
- type: manhattan_pearson
value: 77.92017261109734
- type: manhattan_spearman
value: 72.8805059949965
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 79.75955377133172
- type: cos_sim_spearman
value: 71.8872633964069
- type: euclidean_pearson
value: 76.31922068538256
- type: euclidean_spearman
value: 70.86449661855376
- type: manhattan_pearson
value: 76.47852229730407
- type: manhattan_spearman
value: 70.99367421984789
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 78.80762722908158
- type: cos_sim_spearman
value: 79.84588978756372
- type: euclidean_pearson
value: 79.8216849781164
- type: euclidean_spearman
value: 80.22647061695481
- type: manhattan_pearson
value: 79.56604194112572
- type: manhattan_spearman
value: 79.96495189862462
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 80.1012718092742
- type: cos_sim_spearman
value: 76.86011381793661
- type: euclidean_pearson
value: 79.94426039862019
- type: euclidean_spearman
value: 77.36751135465131
- type: manhattan_pearson
value: 79.87959373304288
- type: manhattan_spearman
value: 77.37717129004746
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 83.90618420346104
- type: cos_sim_spearman
value: 84.77290791243722
- type: euclidean_pearson
value: 84.64732258073293
- type: euclidean_spearman
value: 85.21053649543357
- type: manhattan_pearson
value: 84.61616883522647
- type: manhattan_spearman
value: 85.19803126766931
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.52192114059063
- type: cos_sim_spearman
value: 81.9103244827937
- type: euclidean_pearson
value: 80.99375176138985
- type: euclidean_spearman
value: 81.540250641079
- type: manhattan_pearson
value: 80.84979573396426
- type: manhattan_spearman
value: 81.3742591621492
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.82166001234197
- type: cos_sim_spearman
value: 86.81857495659123
- type: euclidean_pearson
value: 85.72798403202849
- type: euclidean_spearman
value: 85.70482438950965
- type: manhattan_pearson
value: 85.51579093130357
- type: manhattan_spearman
value: 85.41233705379751
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.48071151079803
- type: cos_sim_spearman
value: 65.37838108084044
- type: euclidean_pearson
value: 64.67378947096257
- type: euclidean_spearman
value: 65.39187147219869
- type: manhattan_pearson
value: 65.35487466133208
- type: manhattan_spearman
value: 65.51328499442272
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.64702367823314
- type: cos_sim_spearman
value: 82.49732953181818
- type: euclidean_pearson
value: 83.05996062475664
- type: euclidean_spearman
value: 82.28159546751176
- type: manhattan_pearson
value: 82.98305503664952
- type: manhattan_spearman
value: 82.18405771943928
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.5744649318696
- type: mrr
value: 93.35386291268645
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.093999999999994
- type: map_at_10
value: 61.646
- type: map_at_100
value: 62.197
- type: map_at_1000
value: 62.22800000000001
- type: map_at_3
value: 58.411
- type: map_at_5
value: 60.585
- type: mrr_at_1
value: 55.00000000000001
- type: mrr_at_10
value: 62.690999999999995
- type: mrr_at_100
value: 63.139
- type: mrr_at_1000
value: 63.166999999999994
- type: mrr_at_3
value: 60.111000000000004
- type: mrr_at_5
value: 61.778
- type: ndcg_at_1
value: 55.00000000000001
- type: ndcg_at_10
value: 66.271
- type: ndcg_at_100
value: 68.879
- type: ndcg_at_1000
value: 69.722
- type: ndcg_at_3
value: 60.672000000000004
- type: ndcg_at_5
value: 63.929
- type: precision_at_1
value: 55.00000000000001
- type: precision_at_10
value: 9
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.555999999999997
- type: precision_at_5
value: 16.2
- type: recall_at_1
value: 52.093999999999994
- type: recall_at_10
value: 79.567
- type: recall_at_100
value: 91.60000000000001
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.633
- type: recall_at_5
value: 72.68299999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83267326732673
- type: cos_sim_ap
value: 95.77995366495178
- type: cos_sim_f1
value: 91.51180311401306
- type: cos_sim_precision
value: 91.92734611503532
- type: cos_sim_recall
value: 91.10000000000001
- type: dot_accuracy
value: 99.63366336633663
- type: dot_ap
value: 88.53996286967461
- type: dot_f1
value: 81.06537530266343
- type: dot_precision
value: 78.59154929577464
- type: dot_recall
value: 83.7
- type: euclidean_accuracy
value: 99.82376237623762
- type: euclidean_ap
value: 95.53192209281187
- type: euclidean_f1
value: 91.19683481701286
- type: euclidean_precision
value: 90.21526418786692
- type: euclidean_recall
value: 92.2
- type: manhattan_accuracy
value: 99.82376237623762
- type: manhattan_ap
value: 95.55642082191741
- type: manhattan_f1
value: 91.16186693147964
- type: manhattan_precision
value: 90.53254437869822
- type: manhattan_recall
value: 91.8
- type: max_accuracy
value: 99.83267326732673
- type: max_ap
value: 95.77995366495178
- type: max_f1
value: 91.51180311401306
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 54.508462134213474
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.06549765184959
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.43129549466616
- type: mrr
value: 50.20613169510227
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.069516173193044
- type: cos_sim_spearman
value: 29.872498354017353
- type: dot_pearson
value: 28.80761257516063
- type: dot_spearman
value: 28.397422678527708
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.169
- type: map_at_10
value: 1.208
- type: map_at_100
value: 5.925
- type: map_at_1000
value: 14.427000000000001
- type: map_at_3
value: 0.457
- type: map_at_5
value: 0.716
- type: mrr_at_1
value: 64
- type: mrr_at_10
value: 74.075
- type: mrr_at_100
value: 74.303
- type: mrr_at_1000
value: 74.303
- type: mrr_at_3
value: 71
- type: mrr_at_5
value: 72.89999999999999
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 50.376
- type: ndcg_at_100
value: 38.582
- type: ndcg_at_1000
value: 35.663
- type: ndcg_at_3
value: 55.592
- type: ndcg_at_5
value: 53.647999999999996
- type: precision_at_1
value: 64
- type: precision_at_10
value: 53.2
- type: precision_at_100
value: 39.6
- type: precision_at_1000
value: 16.218
- type: precision_at_3
value: 59.333000000000006
- type: precision_at_5
value: 57.599999999999994
- type: recall_at_1
value: 0.169
- type: recall_at_10
value: 1.423
- type: recall_at_100
value: 9.049999999999999
- type: recall_at_1000
value: 34.056999999999995
- type: recall_at_3
value: 0.48700000000000004
- type: recall_at_5
value: 0.792
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.319
- type: map_at_10
value: 7.112
- type: map_at_100
value: 12.588
- type: map_at_1000
value: 14.056
- type: map_at_3
value: 2.8049999999999997
- type: map_at_5
value: 4.68
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 33.94
- type: mrr_at_100
value: 35.193000000000005
- type: mrr_at_1000
value: 35.193000000000005
- type: mrr_at_3
value: 29.932
- type: mrr_at_5
value: 32.279
- type: ndcg_at_1
value: 15.306000000000001
- type: ndcg_at_10
value: 18.096
- type: ndcg_at_100
value: 30.512
- type: ndcg_at_1000
value: 42.148
- type: ndcg_at_3
value: 17.034
- type: ndcg_at_5
value: 18.509
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 18.776
- type: precision_at_100
value: 7.02
- type: precision_at_1000
value: 1.467
- type: precision_at_3
value: 19.048000000000002
- type: precision_at_5
value: 22.041
- type: recall_at_1
value: 1.319
- type: recall_at_10
value: 13.748
- type: recall_at_100
value: 43.972
- type: recall_at_1000
value: 79.557
- type: recall_at_3
value: 4.042
- type: recall_at_5
value: 7.742
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.2282
- type: ap
value: 13.995763859570426
- type: f1
value: 54.08126256731344
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 57.64006791171477
- type: f1
value: 57.95841320748957
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.19267841788564
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.96614412588663
- type: cos_sim_ap
value: 67.75985678572738
- type: cos_sim_f1
value: 64.04661542276222
- type: cos_sim_precision
value: 60.406922357343305
- type: cos_sim_recall
value: 68.15303430079156
- type: dot_accuracy
value: 79.5732252488526
- type: dot_ap
value: 51.30562107572645
- type: dot_f1
value: 53.120759837177744
- type: dot_precision
value: 46.478037198258804
- type: dot_recall
value: 61.97889182058047
- type: euclidean_accuracy
value: 84.00786791440663
- type: euclidean_ap
value: 67.58930214486998
- type: euclidean_f1
value: 64.424821579775
- type: euclidean_precision
value: 59.4817958454322
- type: euclidean_recall
value: 70.26385224274406
- type: manhattan_accuracy
value: 83.87673600762949
- type: manhattan_ap
value: 67.4250981523309
- type: manhattan_f1
value: 64.10286658015808
- type: manhattan_precision
value: 57.96885001066781
- type: manhattan_recall
value: 71.68865435356201
- type: max_accuracy
value: 84.00786791440663
- type: max_ap
value: 67.75985678572738
- type: max_f1
value: 64.424821579775
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.41347459929368
- type: cos_sim_ap
value: 84.89261930113058
- type: cos_sim_f1
value: 77.13677607258877
- type: cos_sim_precision
value: 74.88581164358733
- type: cos_sim_recall
value: 79.52725592854944
- type: dot_accuracy
value: 86.32359219156285
- type: dot_ap
value: 79.29794992131094
- type: dot_f1
value: 72.84356337679777
- type: dot_precision
value: 67.31761478675462
- type: dot_recall
value: 79.35786880197105
- type: euclidean_accuracy
value: 88.33585593976791
- type: euclidean_ap
value: 84.73257641312746
- type: euclidean_f1
value: 76.83529582788195
- type: euclidean_precision
value: 72.76294052863436
- type: euclidean_recall
value: 81.3905143209116
- type: manhattan_accuracy
value: 88.3086894089339
- type: manhattan_ap
value: 84.66304891729399
- type: manhattan_f1
value: 76.8181650632165
- type: manhattan_precision
value: 73.6864436744219
- type: manhattan_recall
value: 80.22790267939637
- type: max_accuracy
value: 88.41347459929368
- type: max_ap
value: 84.89261930113058
- type: max_f1
value: 77.13677607258877
---
# Plasmoxy/bge-micro-v2-Q4_K_M-GGUF
A really small BGE embedding model, quantized to 4-bit GGUF.
This model was converted to GGUF format from [`TaylorAI/bge-micro-v2`](https://huggingface.co/TaylorAI/bge-micro-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TaylorAI/bge-micro-v2) for more details on the model.
**!!! IMPORTANT !!! - the context size is 512, so specify it explicitly (`-c 512`) when running llama.cpp.**
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512 -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512
```
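### Embeddings:
Since bge-micro-v2 is an embedding model rather than a chat model, you will usually want vectors instead of completions. A minimal sketch, assuming a recent llama.cpp build where `llama-server` accepts an `--embeddings` flag and exposes an `/embedding` endpoint on the default port 8080 (flag and route names can differ between llama.cpp versions, so check `llama-server --help`):
```bash
# Start the server in embeddings mode (flag name is an assumption; older builds use --embedding)
llama-server --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512 --embeddings

# Request an embedding vector for a piece of text (endpoint path is an assumption)
curl http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "A really small BGE embedding model."}'
```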
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Plasmoxy/bge-micro-v2-Q4_K_M-GGUF --hf-file bge-micro-v2-q4_k_m.gguf -c 512
```
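If you built llama.cpp from source, the repo also includes a dedicated embedding example. A minimal sketch, assuming the binary is named `llama-embedding` in your build and that the GGUF file has already been downloaded locally (binary name and flags vary between llama.cpp versions, so check `--help`):
```bash
# Print the embedding vector for a single sentence
# (binary name and flags are assumptions for recent llama.cpp builds)
./llama-embedding -m bge-micro-v2-q4_k_m.gguf -c 512 -p "A really small BGE embedding model."
```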
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
blockblockblock/Dark-Miqu-70B-bpw2.25-exl2 | blockblockblock | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.19522",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | 1,715 | 1,715 | 5 | 0 | ---
license: other
---

***NOTE***: *A full range of GGUF quants has been kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).*
A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere!
This model **excels** at writing Dark/Grimdark fantasy (see examples below).
# Model background
Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).
This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`4.08 +/- 0.02` vs `4.02 +/- 0.02`). It also generates longer responses when prompted.
The model was created in two stages:
- First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results.
- In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model.
# Prompting format
Vicuna format is preferred:
```
USER: {prompt} ASSISTANT:
```
Mistral and Alpaca formats are also supported:
```
[INST] {prompt} [/INST]
```
```
### Instruction:
{prompt}
### Response:
```
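For example, a fully formed Vicuna-style prompt looks like this (the instruction text is just an illustration):
```
USER: Write the opening paragraph of a grimdark story set in a plague-ridden port city. ASSISTANT:
```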
# Licence and usage restrictions
[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
# Mergekit configuration
The following YAML configuration was used to produce this model:
```yaml
name: midnight-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: sophosympatheia/Midnight-Rose-70B-v2.0.3
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: euryale-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/Euryale-1.3-L2-70B
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: winter-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/WinterGoddess-1.4x-70B-L2
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: dark-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: midnight-miqu-70b
- model: euryale-miqu-70b
- model: winter-miqu-70b
base_model: 152334H/miqu-1-70b-sf
merge_method: model_stock
dtype: float16
```
## Key configuration details:
- '`merge_method: slerp`' uses spherical linear interpolation for merging models.
- '`parameters: t`' controls the interpolation ratios between models.
- '`embed_slerp: true`' applies slerp to the embedding layers.
- '`merge_method: model_stock`' uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method.
See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.
**NOTE**: Run this with `mergekit-mega` rather than `mergekit`, as there are four YAML documents in this one file.
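For reference, a minimal invocation sketch, assuming Mergekit has been installed from the linked repo and the YAML above is saved as `dark-miqu.yaml` (a hypothetical filename; exact flags vary between Mergekit versions):
```bash
pip install git+https://github.com/arcee-ai/mergekit.git

# mergekit-mega processes all four YAML documents in order, producing the three
# named slerp merges first and then the final model_stock merge
mergekit-mega dark-miqu.yaml ./dark-miqu-70b --cuda --lazy-unpickle
```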
# Example stories
The following mix of "dark" stories was generated using the Vicuna prompt format, with no system message and `temperature=0`:
## Dark fantasy stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes
### ASSISTANT
Chapter One: The Crimson Dawn
The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks.
His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death.
Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces.
He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings.
Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close.
He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out.
He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live."
She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat."
She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun."
He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air.
"Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick."
She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend.
"I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!"
Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk."
He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape.
"Tell me," he said, his voice a whisper. "Tell me, and I'll end this."
She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve.
Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way.
He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head.
And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous.
Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him.
They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured.
In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain.
It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it.
He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive.
But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight.
He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost.
And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it.
It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows.
It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters.
It was a story that had only just begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst.
### ASSISTANT
The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat.
He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run.
Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life.
His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place.
As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start.
The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts.
He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced.
"Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away."
The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted.
He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets.
"What have I done?" he asked the mirror, but it gave no answer.
He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous.
He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his.
The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach.
But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time.
The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' whose job is to put the mortally wounded out of their misery after a battle. He has no conscience about what he does. The story starts the morning after the battle, and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character.
### ASSISTANT
The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying.
He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair.
*Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?*
But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do.
He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart.
"Shh," he whispered, his voice a soothing balm. "It's over now."
The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here."
The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me."
As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name.
*Elara.*
The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end.
He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release.
As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield.
"Anyone here need my services?" he asked, his voice devoid of emotion.
One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it."
The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged.
*Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.*
He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance.
"No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly."
The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different.
"Why?" he asked, genuinely curious.
The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up."
The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on.
He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick."
The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening.
The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?*
But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation.
### ASSISTANT
The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole.
"What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?"
The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a mountain of muscle named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window.
"Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot.
The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered.
As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely?
He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies.
"You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice.
Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North."
The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom.
"What do you think it means?" he pressed, unable to keep the desperation at bay.
Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here."
The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine.
As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone.
"We have to get out of here," he croaked, his voice barely above a whisper.
Grimgor just grunted in agreement, his gaze never leaving the window.
The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment.
But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it.
"Grimgor," he hissed, tugging on the larger man's arm. "Look."
Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it.
The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe?
The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge.
"Who are you?" he demanded, trying to sound braver than he felt.
The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice.
"I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom."
With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell.
The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains.
"We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here."
The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown.
Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other.
"What now?" the Fox asked, his voice barely above a whisper.
Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine.
"Now," he said, "we find out what's really going on."
And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned.
</details>
## Dark sci-fi stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift
The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism.
In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford.
Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs.
She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel.
As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay.
Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills.
"Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now."
She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement.
"Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines."
Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper.
"They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart."
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others.
"It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work."
Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!"
The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over?
As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them.
And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart.
As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive?
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then he and the others notice the mushroom clouds and they have to face the inevitability of the situation they are in.
### ASSISTANT
Chapter 1: Ashes of Exodus
The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began.
As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago.
Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself.
But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth.
He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely.
Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online.
And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale.
His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps.
"What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear.
Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly.
Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened."
The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return.
As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived.
But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void.
He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next."
Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here."
Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft."
Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet.
As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened.
And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why?
The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought.
He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look."
The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be.
As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand.
But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one.
He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here...
### ASSISTANT
Chapter 1: The Biodome
The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet.
Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known.
"Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal."
He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland.
But today was not that day. Today, it was just another tomb waiting to be sealed.
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked.
"What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?"
Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion.
"I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good."
Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead.
The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine.
"Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!"
Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake.
With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom.
"Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!"
But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness.
And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more.
When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones.
Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten.
Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard.
And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors.
But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile.
"Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity.
And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone.
### ASSISTANT
Chapter 1 - Whispers from the Void
The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos.
His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge.
A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean.
The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake.
Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger.
He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now.
As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again.
But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act.
The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe.
As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch.
But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home.
The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything.
What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence.
Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind.
He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here.
Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale.
</details>
## Miscellaneous stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to be able to leave soon, but it has just been discovered that, at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only 6 can survive: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today.
### ASSISTANT
In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash.
His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used.
The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations.
Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long?
He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes.
He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff.
Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news."
The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply.
"The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years."
Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?"
"We have enough for six, maybe seven if we ration carefully. But not twelve."
The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close.
"We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes."
Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival.
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure.
In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now.
He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary.
Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking.
Samuel shook his head. "I don't know."
"We can't just let six of us die," she said, her voice rising in panic. "There has to be another way."
He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?"
Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up."
Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic."
She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic."
They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic."
Samuel raised an eyebrow. "What?"
"We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface."
He stared at her in disbelief. "You can't be serious."
"It's not ideal," she admitted. "But it's better than just letting them die down here."
Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have?
As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor.
What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls.
Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel.
And all he could do was watch as the world above claimed them, one by one.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid 1970s. There should be a mysterious 'occult' item that connects these two time periods.
### ASSISTANT
Chapter One: The Golden Veil
The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune.
In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely.
His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything.
Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within.
Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner.
One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world.
"Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?"
The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion.
"No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me."
He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within.
"I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it."
The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box."
As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for.
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness.
But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher.
Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones.
"I must see this box for myself," he said, his voice barely above a whisper. "Take me to it."
The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels.
There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity.
As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls.
A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe."
Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe.
"But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds."
And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean?
As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him!
He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety.
And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself.
But little did he know that the true mystery was only just beginning...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty historical novel set during the Battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them.
### ASSISTANT
Chapter 1
The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live.
The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light.
I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line."
I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them.
A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth.
I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness.
When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming."
And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step.
I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there...
But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction.
"Halt!" one of them yelled, his voice harsh and commanding. "Hands up!"
I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him.
"Run," he whispered, his voice barely audible. "Leave me."
I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything.
But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat.
"Nein," I whispered, my voice choked with grief and rage. "Nein!"
I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us.
With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness.
As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me.
But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform.
### ASSISTANT
In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel.
His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work.
A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips.
The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface.
Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival.
"You're late," Chigurh said finally, his voice low and gravelly.
The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic."
Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement.
Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket.
"You know what needs to be done," the man in the suit said, his voice barely above a whisper.
Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he had barely touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress.
Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary.
He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose.
Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die.
He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction.
As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move.
Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw.
</details>
Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)! | [
"TRANSLATION"
] | [
"BEAR"
] | Non_BioNLP |
Omartificial-Intelligence-Space/Arabic-mpnet-base-all-nli-triplet | Omartificial-Intelligence-Space | sentence-similarity | [
"sentence-transformers",
"safetensors",
"mpnet",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:tomaarsen/mpnet-base-all-nli-triplet",
"base_model:finetune:tomaarsen/mpnet-base-all-nli-triplet",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,718 | 1,737 | 50 | 10 | ---
base_model: tomaarsen/mpnet-base-all-nli-triplet
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط
النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث
إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء
مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة
شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
model-index:
- name: SentenceTransformer based on tomaarsen/mpnet-base-all-nli-triplet
results:
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrieval (ar)
type: miracl/mmteb-miracl
config: ar
split: dev
revision: main
metrics:
- type: ndcg_at_1
value: 1.934
- type: ndcg_at_3
value: 2.461
- type: ndcg_at_5
value: 2.907
- type: ndcg_at_10
value: 3.581
- type: ndcg_at_20
value: 4.041
- type: ndcg_at_100
value: 5.669
- type: ndcg_at_1000
value: 8.247
- type: map_at_1
value: 1.298
- type: map_at_3
value: 1.974
- type: map_at_5
value: 2.236
- type: map_at_10
value: 2.503
- type: map_at_20
value: 2.6310000000000002
- type: map_at_100
value: 2.8529999999999998
- type: map_at_1000
value: 2.939
- type: recall_at_1
value: 1.298
- type: recall_at_3
value: 2.785
- type: recall_at_5
value: 3.878
- type: recall_at_10
value: 5.738
- type: recall_at_20
value: 7.2940000000000005
- type: recall_at_100
value: 14.999
- type: recall_at_1000
value: 33.268
- type: precision_at_1
value: 1.934
- type: precision_at_3
value: 1.485
- type: precision_at_5
value: 1.222
- type: precision_at_10
value: 0.9249999999999999
- type: precision_at_20
value: 0.608
- type: precision_at_100
value: 0.263
- type: precision_at_1000
value: 0.061
- type: mrr_at_1
value: 1.9337
- type: mrr_at_3
value: 2.9236
- type: mrr_at_5
value: 3.2361
- type: mrr_at_10
value: 3.5991000000000004
- type: mrr_at_20
value: 3.7424
- type: mrr_at_100
value: 3.9737
- type: mrr_at_1000
value: 4.0521
- type: nauc_ndcg_at_1_max
value: 18.7293
- type: nauc_ndcg_at_1_std
value: -22.227
- type: nauc_ndcg_at_1_diff1
value: 53.751099999999994
- type: nauc_ndcg_at_3_max
value: 13.960700000000001
- type: nauc_ndcg_at_3_std
value: -19.653100000000002
- type: nauc_ndcg_at_3_diff1
value: 39.860800000000005
- type: nauc_ndcg_at_5_max
value: 12.2772
- type: nauc_ndcg_at_5_std
value: -19.7249
- type: nauc_ndcg_at_5_diff1
value: 35.011199999999995
- type: nauc_ndcg_at_10_max
value: 9.7866
- type: nauc_ndcg_at_10_std
value: -19.2077
- type: nauc_ndcg_at_10_diff1
value: 29.893900000000002
- type: nauc_ndcg_at_20_max
value: 8.677700000000002
- type: nauc_ndcg_at_20_std
value: -18.2092
- type: nauc_ndcg_at_20_diff1
value: 27.149800000000003
- type: nauc_ndcg_at_100_max
value: 8.693900000000001
- type: nauc_ndcg_at_100_std
value: -15.490100000000002
- type: nauc_ndcg_at_100_diff1
value: 22.0869
- type: nauc_ndcg_at_1000_max
value: 8.8565
- type: nauc_ndcg_at_1000_std
value: -14.285200000000001
- type: nauc_ndcg_at_1000_diff1
value: 19.5158
- type: nauc_map_at_1_max
value: 18.909100000000002
- type: nauc_map_at_1_std
value: -24.4301
- type: nauc_map_at_1_diff1
value: 60.7617
- type: nauc_map_at_3_max
value: 14.1068
- type: nauc_map_at_3_std
value: -21.1018
- type: nauc_map_at_3_diff1
value: 43.9158
- type: nauc_map_at_5_max
value: 13.1835
- type: nauc_map_at_5_std
value: -20.8493
- type: nauc_map_at_5_diff1
value: 39.895399999999995
- type: nauc_map_at_10_max
value: 11.8414
- type: nauc_map_at_10_std
value: -20.279
- type: nauc_map_at_10_diff1
value: 36.4339
- type: nauc_map_at_20_max
value: 11.1734
- type: nauc_map_at_20_std
value: -19.801299999999998
- type: nauc_map_at_20_diff1
value: 34.8787
- type: nauc_map_at_100_max
value: 11.018
- type: nauc_map_at_100_std
value: -19.1222
- type: nauc_map_at_100_diff1
value: 33.216699999999996
- type: nauc_map_at_1000_max
value: 11.120199999999999
- type: nauc_map_at_1000_std
value: -18.8841
- type: nauc_map_at_1000_diff1
value: 32.8634
- type: nauc_recall_at_1_max
value: 18.909100000000002
- type: nauc_recall_at_1_std
value: -24.4301
- type: nauc_recall_at_1_diff1
value: 60.7617
- type: nauc_recall_at_3_max
value: 11.9728
- type: nauc_recall_at_3_std
value: -18.6359
- type: nauc_recall_at_3_diff1
value: 35.7044
- type: nauc_recall_at_5_max
value: 9.5557
- type: nauc_recall_at_5_std
value: -18.8616
- type: nauc_recall_at_5_diff1
value: 27.9593
- type: nauc_recall_at_10_max
value: 5.581300000000001
- type: nauc_recall_at_10_std
value: -18.3274
- type: nauc_recall_at_10_diff1
value: 21.3123
- type: nauc_recall_at_20_max
value: 4.2211
- type: nauc_recall_at_20_std
value: -16.7507
- type: nauc_recall_at_20_diff1
value: 17.9617
- type: nauc_recall_at_100_max
value: 5.5294
- type: nauc_recall_at_100_std
value: -11.9885
- type: nauc_recall_at_100_diff1
value: 11.269
- type: nauc_recall_at_1000_max
value: 5.6486
- type: nauc_recall_at_1000_std
value: -11.1735
- type: nauc_recall_at_1000_diff1
value: 9.0209
- type: nauc_precision_at_1_max
value: 18.7293
- type: nauc_precision_at_1_std
value: -22.227
- type: nauc_precision_at_1_diff1
value: 53.751099999999994
- type: nauc_precision_at_3_max
value: 13.1207
- type: nauc_precision_at_3_std
value: -17.6116
- type: nauc_precision_at_3_diff1
value: 32.0242
- type: nauc_precision_at_5_max
value: 12.2403
- type: nauc_precision_at_5_std
value: -16.9403
- type: nauc_precision_at_5_diff1
value: 26.3656
- type: nauc_precision_at_10_max
value: 9.5427
- type: nauc_precision_at_10_std
value: -16.5917
- type: nauc_precision_at_10_diff1
value: 21.297
- type: nauc_precision_at_20_max
value: 8.2911
- type: nauc_precision_at_20_std
value: -14.3532
- type: nauc_precision_at_20_diff1
value: 17.999599999999997
- type: nauc_precision_at_100_max
value: 10.3474
- type: nauc_precision_at_100_std
value: -7.6601
- type: nauc_precision_at_100_diff1
value: 12.3374
- type: nauc_precision_at_1000_max
value: 10.9218
- type: nauc_precision_at_1000_std
value: -4.5216
- type: nauc_precision_at_1000_diff1
value: 8.4976
- type: nauc_mrr_at_1_max
value: 18.7293
- type: nauc_mrr_at_1_std
value: -22.227
- type: nauc_mrr_at_1_diff1
value: 53.751099999999994
- type: nauc_mrr_at_3_max
value: 14.973700000000001
- type: nauc_mrr_at_3_std
value: -19.781000000000002
- type: nauc_mrr_at_3_diff1
value: 39.7143
- type: nauc_mrr_at_5_max
value: 14.2562
- type: nauc_mrr_at_5_std
value: -19.3477
- type: nauc_mrr_at_5_diff1
value: 37.0654
- type: nauc_mrr_at_10_max
value: 12.6741
- type: nauc_mrr_at_10_std
value: -19.4737
- type: nauc_mrr_at_10_diff1
value: 34.4683
- type: nauc_mrr_at_20_max
value: 12.1728
- type: nauc_mrr_at_20_std
value: -19.186500000000002
- type: nauc_mrr_at_20_diff1
value: 33.287299999999995
- type: nauc_mrr_at_100_max
value: 11.9865
- type: nauc_mrr_at_100_std
value: -18.7337
- type: nauc_mrr_at_100_diff1
value: 32.0965
- type: nauc_mrr_at_1000_max
value: 11.9275
- type: nauc_mrr_at_1000_std
value: -18.6911
- type: nauc_mrr_at_1000_diff1
value: 31.8893
- type: main_score
value: 3.581
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrievalHardNegatives (ar)
type: mteb/miracl-hard-negatives
config: ar
split: dev
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
metrics:
- type: ndcg_at_1
value: 3.2
- type: ndcg_at_3
value: 4.223
- type: ndcg_at_5
value: 4.941
- type: ndcg_at_10
value: 6.198
- type: ndcg_at_20
value: 7.405
- type: ndcg_at_100
value: 10.586
- type: ndcg_at_1000
value: 14.695
- type: map_at_1
value: 2.083
- type: map_at_3
value: 3.382
- type: map_at_5
value: 3.805
- type: map_at_10
value: 4.314
- type: map_at_20
value: 4.662
- type: map_at_100
value: 5.133
- type: map_at_1000
value: 5.288
- type: recall_at_1
value: 2.083
- type: recall_at_3
value: 4.941
- type: recall_at_5
value: 6.641
- type: recall_at_10
value: 9.998
- type: recall_at_20
value: 13.971
- type: recall_at_100
value: 28.610000000000003
- type: recall_at_1000
value: 56.98800000000001
- type: precision_at_1
value: 3.2
- type: precision_at_3
value: 2.4330000000000003
- type: precision_at_5
value: 2.02
- type: precision_at_10
value: 1.63
- type: precision_at_20
value: 1.23
- type: precision_at_100
value: 0.538
- type: precision_at_1000
value: 0.11
- type: mrr_at_1
value: 3.2
- type: mrr_at_3
value: 4.9167000000000005
- type: mrr_at_5
value: 5.4817
- type: mrr_at_10
value: 6.1372
- type: mrr_at_20
value: 6.4818
- type: mrr_at_100
value: 6.9077
- type: mrr_at_1000
value: 7.017900000000001
- type: nauc_ndcg_at_1_max
value: 7.5344999999999995
- type: nauc_ndcg_at_1_std
value: -17.3808
- type: nauc_ndcg_at_1_diff1
value: 23.0707
- type: nauc_ndcg_at_3_max
value: 9.2206
- type: nauc_ndcg_at_3_std
value: -12.559400000000002
- type: nauc_ndcg_at_3_diff1
value: 16.543
- type: nauc_ndcg_at_5_max
value: 7.2911
- type: nauc_ndcg_at_5_std
value: -13.4758
- type: nauc_ndcg_at_5_diff1
value: 15.2764
- type: nauc_ndcg_at_10_max
value: 5.4578
- type: nauc_ndcg_at_10_std
value: -14.1635
- type: nauc_ndcg_at_10_diff1
value: 13.047900000000002
- type: nauc_ndcg_at_20_max
value: 7.0633
- type: nauc_ndcg_at_20_std
value: -12.3854
- type: nauc_ndcg_at_20_diff1
value: 11.6855
- type: nauc_ndcg_at_100_max
value: 10.4362
- type: nauc_ndcg_at_100_std
value: -9.9392
- type: nauc_ndcg_at_100_diff1
value: 11.9351
- type: nauc_ndcg_at_1000_max
value: 11.5675
- type: nauc_ndcg_at_1000_std
value: -8.5511
- type: nauc_ndcg_at_1000_diff1
value: 12.418
- type: nauc_map_at_1_max
value: 8.729199999999999
- type: nauc_map_at_1_std
value: -22.5749
- type: nauc_map_at_1_diff1
value: 24.7528
- type: nauc_map_at_3_max
value: 8.6757
- type: nauc_map_at_3_std
value: -14.871899999999998
- type: nauc_map_at_3_diff1
value: 17.5986
- type: nauc_map_at_5_max
value: 7.725999999999999
- type: nauc_map_at_5_std
value: -14.5548
- type: nauc_map_at_5_diff1
value: 16.54
- type: nauc_map_at_10_max
value: 6.399000000000001
- type: nauc_map_at_10_std
value: -14.7618
- type: nauc_map_at_10_diff1
value: 14.735500000000002
- type: nauc_map_at_20_max
value: 6.9674
- type: nauc_map_at_20_std
value: -14.211099999999998
- type: nauc_map_at_20_diff1
value: 14.294599999999999
- type: nauc_map_at_100_max
value: 8.024000000000001
- type: nauc_map_at_100_std
value: -13.2243
- type: nauc_map_at_100_diff1
value: 14.1314
- type: nauc_map_at_1000_max
value: 8.1127
- type: nauc_map_at_1000_std
value: -13.014500000000002
- type: nauc_map_at_1000_diff1
value: 14.1036
- type: nauc_recall_at_1_max
value: 8.729199999999999
- type: nauc_recall_at_1_std
value: -22.5749
- type: nauc_recall_at_1_diff1
value: 24.7528
- type: nauc_recall_at_3_max
value: 9.558800000000002
- type: nauc_recall_at_3_std
value: -10.4583
- type: nauc_recall_at_3_diff1
value: 14.2197
- type: nauc_recall_at_5_max
value: 6.5597
- type: nauc_recall_at_5_std
value: -12.167200000000001
- type: nauc_recall_at_5_diff1
value: 13.283900000000001
- type: nauc_recall_at_10_max
value: 2.7824
- type: nauc_recall_at_10_std
value: -13.879800000000001
- type: nauc_recall_at_10_diff1
value: 9.4774
- type: nauc_recall_at_20_max
value: 5.9161
- type: nauc_recall_at_20_std
value: -10.937
- type: nauc_recall_at_20_diff1
value: 7.096900000000001
- type: nauc_recall_at_100_max
value: 12.2712
- type: nauc_recall_at_100_std
value: -7.2211
- type: nauc_recall_at_100_diff1
value: 7.9826999999999995
- type: nauc_recall_at_1000_max
value: 16.5037
- type: nauc_recall_at_1000_std
value: -3.8615999999999997
- type: nauc_recall_at_1000_diff1
value: 10.1532
- type: nauc_precision_at_1_max
value: 7.5344999999999995
- type: nauc_precision_at_1_std
value: -17.3808
- type: nauc_precision_at_1_diff1
value: 23.0707
- type: nauc_precision_at_3_max
value: 8.8492
- type: nauc_precision_at_3_std
value: -11.2959
- type: nauc_precision_at_3_diff1
value: 14.475999999999999
- type: nauc_precision_at_5_max
value: 6.7330000000000005
- type: nauc_precision_at_5_std
value: -11.0518
- type: nauc_precision_at_5_diff1
value: 11.148
- type: nauc_precision_at_10_max
value: 5.7345
- type: nauc_precision_at_10_std
value: -11.168899999999999
- type: nauc_precision_at_10_diff1
value: 10.2786
- type: nauc_precision_at_20_max
value: 10.4611
- type: nauc_precision_at_20_std
value: -5.3885000000000005
- type: nauc_precision_at_20_diff1
value: 9.0225
- type: nauc_precision_at_100_max
value: 16.0671
- type: nauc_precision_at_100_std
value: -0.5837
- type: nauc_precision_at_100_diff1
value: 12.506300000000001
- type: nauc_precision_at_1000_max
value: 13.394
- type: nauc_precision_at_1000_std
value: 2.2683
- type: nauc_precision_at_1000_diff1
value: 10.2308
- type: nauc_mrr_at_1_max
value: 7.5344999999999995
- type: nauc_mrr_at_1_std
value: -17.3808
- type: nauc_mrr_at_1_diff1
value: 23.0707
- type: nauc_mrr_at_3_max
value: 8.5063
- type: nauc_mrr_at_3_std
value: -13.3302
- type: nauc_mrr_at_3_diff1
value: 17.413999999999998
- type: nauc_mrr_at_5_max
value: 7.4507
- type: nauc_mrr_at_5_std
value: -14.0678
- type: nauc_mrr_at_5_diff1
value: 16.5774
- type: nauc_mrr_at_10_max
value: 7.17
- type: nauc_mrr_at_10_std
value: -14.1629
- type: nauc_mrr_at_10_diff1
value: 16.3169
- type: nauc_mrr_at_20_max
value: 7.558
- type: nauc_mrr_at_20_std
value: -13.3002
- type: nauc_mrr_at_20_diff1
value: 15.335299999999998
- type: nauc_mrr_at_100_max
value: 7.947500000000001
- type: nauc_mrr_at_100_std
value: -12.963099999999999
- type: nauc_mrr_at_100_diff1
value: 15.235399999999998
- type: nauc_mrr_at_1000_max
value: 7.9108
- type: nauc_mrr_at_1000_std
value: -12.954099999999999
- type: nauc_mrr_at_1000_diff1
value: 15.2051
- type: main_score
value: 6.198
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 23.985
- type: ndcg_at_3
value: 31.717000000000002
- type: ndcg_at_5
value: 34.439
- type: ndcg_at_10
value: 36.51
- type: ndcg_at_20
value: 38.442
- type: ndcg_at_100
value: 42.731
- type: ndcg_at_1000
value: 45.137
- type: map_at_1
value: 23.985
- type: map_at_3
value: 29.723
- type: map_at_5
value: 31.241000000000003
- type: map_at_10
value: 32.063
- type: map_at_20
value: 32.607
- type: map_at_100
value: 33.181
- type: map_at_1000
value: 33.278999999999996
- type: recall_at_1
value: 23.985
- type: recall_at_3
value: 37.524
- type: recall_at_5
value: 44.101
- type: recall_at_10
value: 50.67700000000001
- type: recall_at_20
value: 58.221000000000004
- type: recall_at_100
value: 81.625
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 23.985
- type: precision_at_3
value: 12.508
- type: precision_at_5
value: 8.82
- type: precision_at_10
value: 5.0680000000000005
- type: precision_at_20
value: 2.911
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 23.9845
- type: mrr_at_3
value: 29.7228
- type: mrr_at_5
value: 31.2411
- type: mrr_at_10
value: 32.0631
- type: mrr_at_20
value: 32.6073
- type: mrr_at_100
value: 33.1811
- type: mrr_at_1000
value: 33.2789
- type: nauc_ndcg_at_1_max
value: 55.551300000000005
- type: nauc_ndcg_at_1_std
value: 19.2389
- type: nauc_ndcg_at_1_diff1
value: 46.3359
- type: nauc_ndcg_at_3_max
value: 54.64790000000001
- type: nauc_ndcg_at_3_std
value: 20.7714
- type: nauc_ndcg_at_3_diff1
value: 39.2472
- type: nauc_ndcg_at_5_max
value: 52.9641
- type: nauc_ndcg_at_5_std
value: 20.366500000000002
- type: nauc_ndcg_at_5_diff1
value: 38.1887
- type: nauc_ndcg_at_10_max
value: 52.8637
- type: nauc_ndcg_at_10_std
value: 20.069200000000002
- type: nauc_ndcg_at_10_diff1
value: 37.0473
- type: nauc_ndcg_at_20_max
value: 51.578900000000004
- type: nauc_ndcg_at_20_std
value: 19.564500000000002
- type: nauc_ndcg_at_20_diff1
value: 34.5057
- type: nauc_ndcg_at_100_max
value: 52.6159
- type: nauc_ndcg_at_100_std
value: 20.3172
- type: nauc_ndcg_at_100_diff1
value: 35.578199999999995
- type: nauc_ndcg_at_1000_max
value: 53.1581
- type: nauc_ndcg_at_1000_std
value: 20.188
- type: nauc_ndcg_at_1000_diff1
value: 37.285000000000004
- type: nauc_map_at_1_max
value: 55.551300000000005
- type: nauc_map_at_1_std
value: 19.2389
- type: nauc_map_at_1_diff1
value: 46.3359
- type: nauc_map_at_3_max
value: 55.1118
- type: nauc_map_at_3_std
value: 20.3289
- type: nauc_map_at_3_diff1
value: 40.842
- type: nauc_map_at_5_max
value: 54.1547
- type: nauc_map_at_5_std
value: 20.0975
- type: nauc_map_at_5_diff1
value: 40.2913
- type: nauc_map_at_10_max
value: 54.173
- type: nauc_map_at_10_std
value: 20.0246
- type: nauc_map_at_10_diff1
value: 39.8307
- type: nauc_map_at_20_max
value: 53.797799999999995
- type: nauc_map_at_20_std
value: 19.8761
- type: nauc_map_at_20_diff1
value: 39.1152
- type: nauc_map_at_100_max
value: 53.957699999999996
- type: nauc_map_at_100_std
value: 20.0471
- type: nauc_map_at_100_diff1
value: 39.260600000000004
- type: nauc_map_at_1000_max
value: 53.982200000000006
- type: nauc_map_at_1000_std
value: 20.0435
- type: nauc_map_at_1000_diff1
value: 39.334
- type: nauc_recall_at_1_max
value: 55.551300000000005
- type: nauc_recall_at_1_std
value: 19.2389
- type: nauc_recall_at_1_diff1
value: 46.3359
- type: nauc_recall_at_3_max
value: 53.303
- type: nauc_recall_at_3_std
value: 21.9959
- type: nauc_recall_at_3_diff1
value: 34.9686
- type: nauc_recall_at_5_max
value: 49.437599999999996
- type: nauc_recall_at_5_std
value: 21.0745
- type: nauc_recall_at_5_diff1
value: 32.3358
- type: nauc_recall_at_10_max
value: 48.7626
- type: nauc_recall_at_10_std
value: 19.9455
- type: nauc_recall_at_10_diff1
value: 28.7268
- type: nauc_recall_at_20_max
value: 43.4219
- type: nauc_recall_at_20_std
value: 17.959600000000002
- type: nauc_recall_at_20_diff1
value: 17.9683
- type: nauc_recall_at_100_max
value: 46.079
- type: nauc_recall_at_100_std
value: 22.0524
- type: nauc_recall_at_100_diff1
value: 14.742099999999999
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.551300000000005
- type: nauc_precision_at_1_std
value: 19.2389
- type: nauc_precision_at_1_diff1
value: 46.3359
- type: nauc_precision_at_3_max
value: 53.303
- type: nauc_precision_at_3_std
value: 21.9959
- type: nauc_precision_at_3_diff1
value: 34.9686
- type: nauc_precision_at_5_max
value: 49.437599999999996
- type: nauc_precision_at_5_std
value: 21.0745
- type: nauc_precision_at_5_diff1
value: 32.3358
- type: nauc_precision_at_10_max
value: 48.7626
- type: nauc_precision_at_10_std
value: 19.9455
- type: nauc_precision_at_10_diff1
value: 28.7268
- type: nauc_precision_at_20_max
value: 43.4219
- type: nauc_precision_at_20_std
value: 17.959600000000002
- type: nauc_precision_at_20_diff1
value: 17.9683
- type: nauc_precision_at_100_max
value: 46.079
- type: nauc_precision_at_100_std
value: 22.0524
- type: nauc_precision_at_100_diff1
value: 14.742099999999999
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 55.551300000000005
- type: nauc_mrr_at_1_std
value: 19.2389
- type: nauc_mrr_at_1_diff1
value: 46.3359
- type: nauc_mrr_at_3_max
value: 55.1118
- type: nauc_mrr_at_3_std
value: 20.3289
- type: nauc_mrr_at_3_diff1
value: 40.842
- type: nauc_mrr_at_5_max
value: 54.1547
- type: nauc_mrr_at_5_std
value: 20.0975
- type: nauc_mrr_at_5_diff1
value: 40.2913
- type: nauc_mrr_at_10_max
value: 54.173
- type: nauc_mrr_at_10_std
value: 20.0246
- type: nauc_mrr_at_10_diff1
value: 39.8307
- type: nauc_mrr_at_20_max
value: 53.797799999999995
- type: nauc_mrr_at_20_std
value: 19.8761
- type: nauc_mrr_at_20_diff1
value: 39.1152
- type: nauc_mrr_at_100_max
value: 53.957699999999996
- type: nauc_mrr_at_100_std
value: 20.0471
- type: nauc_mrr_at_100_diff1
value: 39.260600000000004
- type: nauc_mrr_at_1000_max
value: 53.982200000000006
- type: nauc_mrr_at_1000_std
value: 20.0435
- type: nauc_mrr_at_1000_diff1
value: 39.334
- type: main_score
value: 36.51
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.483
- type: ndcg_at_3
value: 1.9959999999999998
- type: ndcg_at_5
value: 2.391
- type: ndcg_at_10
value: 3.143
- type: ndcg_at_20
value: 5.194
- type: ndcg_at_100
value: 13.254
- type: ndcg_at_1000
value: 18.717
- type: map_at_1
value: 0.483
- type: map_at_3
value: 1.53
- type: map_at_5
value: 1.7469999999999999
- type: map_at_10
value: 2.041
- type: map_at_20
value: 2.5919999999999996
- type: map_at_100
value: 3.5090000000000003
- type: map_at_1000
value: 3.8
- type: recall_at_1
value: 0.483
- type: recall_at_3
value: 3.382
- type: recall_at_5
value: 4.348
- type: recall_at_10
value: 6.763
- type: recall_at_20
value: 14.976
- type: recall_at_100
value: 61.353
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.483
- type: precision_at_3
value: 1.127
- type: precision_at_5
value: 0.8699999999999999
- type: precision_at_10
value: 0.676
- type: precision_at_20
value: 0.749
- type: precision_at_100
value: 0.614
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.48310000000000003
- type: mrr_at_3
value: 1.5298
- type: mrr_at_5
value: 1.7472
- type: mrr_at_10
value: 2.0409
- type: mrr_at_20
value: 2.5922
- type: mrr_at_100
value: 3.5095
- type: mrr_at_1000
value: 3.8004000000000002
- type: nauc_ndcg_at_1_max
value: -41.553000000000004
- type: nauc_ndcg_at_1_std
value: -41.553000000000004
- type: nauc_ndcg_at_1_diff1
value: -57.523500000000006
- type: nauc_ndcg_at_3_max
value: -44.262
- type: nauc_ndcg_at_3_std
value: -41.594300000000004
- type: nauc_ndcg_at_3_diff1
value: -33.6751
- type: nauc_ndcg_at_5_max
value: -42.9736
- type: nauc_ndcg_at_5_std
value: -42.2472
- type: nauc_ndcg_at_5_diff1
value: -33.2173
- type: nauc_ndcg_at_10_max
value: -31.821700000000003
- type: nauc_ndcg_at_10_std
value: -36.0429
- type: nauc_ndcg_at_10_diff1
value: -19.7423
- type: nauc_ndcg_at_20_max
value: -19.906
- type: nauc_ndcg_at_20_std
value: -25.389200000000002
- type: nauc_ndcg_at_20_diff1
value: -12.357899999999999
- type: nauc_ndcg_at_100_max
value: -14.87
- type: nauc_ndcg_at_100_std
value: -15.4838
- type: nauc_ndcg_at_100_diff1
value: -10.3397
- type: nauc_ndcg_at_1000_max
value: -22.5591
- type: nauc_ndcg_at_1000_std
value: -24.8202
- type: nauc_ndcg_at_1000_diff1
value: -15.3685
- type: nauc_map_at_1_max
value: -41.553000000000004
- type: nauc_map_at_1_std
value: -41.553000000000004
- type: nauc_map_at_1_diff1
value: -57.523500000000006
- type: nauc_map_at_3_max
value: -44.3092
- type: nauc_map_at_3_std
value: -41.9893
- type: nauc_map_at_3_diff1
value: -35.857499999999995
- type: nauc_map_at_5_max
value: -43.298500000000004
- type: nauc_map_at_5_std
value: -42.4017
- type: nauc_map_at_5_diff1
value: -35.0605
- type: nauc_map_at_10_max
value: -37.1022
- type: nauc_map_at_10_std
value: -38.9588
- type: nauc_map_at_10_diff1
value: -26.5455
- type: nauc_map_at_20_max
value: -30.0711
- type: nauc_map_at_20_std
value: -33.1179
- type: nauc_map_at_20_diff1
value: -21.5666
- type: nauc_map_at_100_max
value: -27.4023
- type: nauc_map_at_100_std
value: -29.2105
- type: nauc_map_at_100_diff1
value: -19.9454
- type: nauc_map_at_1000_max
value: -28.6252
- type: nauc_map_at_1000_std
value: -30.6047
- type: nauc_map_at_1000_diff1
value: -20.8378
- type: nauc_recall_at_1_max
value: -41.553000000000004
- type: nauc_recall_at_1_std
value: -41.553000000000004
- type: nauc_recall_at_1_diff1
value: -57.523500000000006
- type: nauc_recall_at_3_max
value: -44.1529
- type: nauc_recall_at_3_std
value: -41.004400000000004
- type: nauc_recall_at_3_diff1
value: -30.7575
- type: nauc_recall_at_5_max
value: -42.5017
- type: nauc_recall_at_5_std
value: -42.0639
- type: nauc_recall_at_5_diff1
value: -31.0911
- type: nauc_recall_at_10_max
value: -25.1079
- type: nauc_recall_at_10_std
value: -32.359
- type: nauc_recall_at_10_diff1
value: -11.9862
- type: nauc_recall_at_20_max
value: -11.081199999999999
- type: nauc_recall_at_20_std
value: -18.5217
- type: nauc_recall_at_20_diff1
value: -5.0226
- type: nauc_recall_at_100_max
value: -5.0011
- type: nauc_recall_at_100_std
value: -3.3889000000000005
- type: nauc_recall_at_100_diff1
value: -3.9987000000000004
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -41.553000000000004
- type: nauc_precision_at_1_std
value: -41.553000000000004
- type: nauc_precision_at_1_diff1
value: -57.523500000000006
- type: nauc_precision_at_3_max
value: -44.1529
- type: nauc_precision_at_3_std
value: -41.004400000000004
- type: nauc_precision_at_3_diff1
value: -30.7575
- type: nauc_precision_at_5_max
value: -42.5017
- type: nauc_precision_at_5_std
value: -42.0639
- type: nauc_precision_at_5_diff1
value: -31.0911
- type: nauc_precision_at_10_max
value: -25.1079
- type: nauc_precision_at_10_std
value: -32.359
- type: nauc_precision_at_10_diff1
value: -11.9862
- type: nauc_precision_at_20_max
value: -11.081199999999999
- type: nauc_precision_at_20_std
value: -18.5217
- type: nauc_precision_at_20_diff1
value: -5.0226
- type: nauc_precision_at_100_max
value: -5.0011
- type: nauc_precision_at_100_std
value: -3.3889000000000005
- type: nauc_precision_at_100_diff1
value: -3.9987000000000004
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -41.553000000000004
- type: nauc_mrr_at_1_std
value: -41.553000000000004
- type: nauc_mrr_at_1_diff1
value: -57.523500000000006
- type: nauc_mrr_at_3_max
value: -44.3092
- type: nauc_mrr_at_3_std
value: -41.9893
- type: nauc_mrr_at_3_diff1
value: -35.857499999999995
- type: nauc_mrr_at_5_max
value: -43.298500000000004
- type: nauc_mrr_at_5_std
value: -42.4017
- type: nauc_mrr_at_5_diff1
value: -35.0605
- type: nauc_mrr_at_10_max
value: -37.1022
- type: nauc_mrr_at_10_std
value: -38.9588
- type: nauc_mrr_at_10_diff1
value: -26.5455
- type: nauc_mrr_at_20_max
value: -30.0711
- type: nauc_mrr_at_20_std
value: -33.1179
- type: nauc_mrr_at_20_diff1
value: -21.5666
- type: nauc_mrr_at_100_max
value: -27.4023
- type: nauc_mrr_at_100_std
value: -29.2105
- type: nauc_mrr_at_100_diff1
value: -19.9454
- type: nauc_mrr_at_1000_max
value: -28.6252
- type: nauc_mrr_at_1000_std
value: -30.6047
- type: nauc_mrr_at_1000_diff1
value: -20.8378
- type: main_score
value: 3.143
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.387
- type: ndcg_at_3
value: 0.799
- type: ndcg_at_5
value: 1.107
- type: ndcg_at_10
value: 1.8950000000000002
- type: ndcg_at_20
value: 2.491
- type: ndcg_at_100
value: 6.7250000000000005
- type: ndcg_at_1000
value: 15.473999999999998
- type: map_at_1
value: 0.387
- type: map_at_3
value: 0.677
- type: map_at_5
value: 0.8410000000000001
- type: map_at_10
value: 1.1520000000000001
- type: map_at_20
value: 1.32
- type: map_at_100
value: 1.82
- type: map_at_1000
value: 2.129
- type: recall_at_1
value: 0.387
- type: recall_at_3
value: 1.161
- type: recall_at_5
value: 1.934
- type: recall_at_10
value: 4.449
- type: recall_at_20
value: 6.77
- type: recall_at_100
value: 30.947999999999997
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.387
- type: precision_at_3
value: 0.387
- type: precision_at_5
value: 0.387
- type: precision_at_10
value: 0.445
- type: precision_at_20
value: 0.338
- type: precision_at_100
value: 0.309
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.3868
- type: mrr_at_3
value: 0.677
- type: mrr_at_5
value: 0.8413999999999999
- type: mrr_at_10
value: 1.1516
- type: mrr_at_20
value: 1.3199
- type: mrr_at_100
value: 1.8199
- type: mrr_at_1000
value: 2.1289
- type: nauc_ndcg_at_1_max
value: 46.4561
- type: nauc_ndcg_at_1_std
value: -32.306200000000004
- type: nauc_ndcg_at_1_diff1
value: 4.4164
- type: nauc_ndcg_at_3_max
value: 21.7988
- type: nauc_ndcg_at_3_std
value: 9.9137
- type: nauc_ndcg_at_3_diff1
value: 31.1407
- type: nauc_ndcg_at_5_max
value: 11.1279
- type: nauc_ndcg_at_5_std
value: 11.2983
- type: nauc_ndcg_at_5_diff1
value: 11.506
- type: nauc_ndcg_at_10_max
value: 13.262199999999998
- type: nauc_ndcg_at_10_std
value: 11.3881
- type: nauc_ndcg_at_10_diff1
value: 8.228100000000001
- type: nauc_ndcg_at_20_max
value: 5.5699
- type: nauc_ndcg_at_20_std
value: 9.5456
- type: nauc_ndcg_at_20_diff1
value: 1.0035
- type: nauc_ndcg_at_100_max
value: 12.0172
- type: nauc_ndcg_at_100_std
value: 14.402999999999999
- type: nauc_ndcg_at_100_diff1
value: -3.5281
- type: nauc_ndcg_at_1000_max
value: 10.545
- type: nauc_ndcg_at_1000_std
value: 12.3847
- type: nauc_ndcg_at_1000_diff1
value: -1.6625999999999999
- type: nauc_map_at_1_max
value: 46.4561
- type: nauc_map_at_1_std
value: -32.306200000000004
- type: nauc_map_at_1_diff1
value: 4.4164
- type: nauc_map_at_3_max
value: 24.696299999999997
- type: nauc_map_at_3_std
value: 1.8696000000000002
- type: nauc_map_at_3_diff1
value: 26.0786
- type: nauc_map_at_5_max
value: 16.475
- type: nauc_map_at_5_std
value: 3.9592
- type: nauc_map_at_5_diff1
value: 13.389499999999998
- type: nauc_map_at_10_max
value: 16.2084
- type: nauc_map_at_10_std
value: 5.8298000000000005
- type: nauc_map_at_10_diff1
value: 10.8911
- type: nauc_map_at_20_max
value: 11.9237
- type: nauc_map_at_20_std
value: 5.7805
- type: nauc_map_at_20_diff1
value: 6.8079
- type: nauc_map_at_100_max
value: 12.779399999999999
- type: nauc_map_at_100_std
value: 8.5426
- type: nauc_map_at_100_diff1
value: 3.11
- type: nauc_map_at_1000_max
value: 12.587200000000001
- type: nauc_map_at_1000_std
value: 8.2159
- type: nauc_map_at_1000_diff1
value: 3.3531
- type: nauc_recall_at_1_max
value: 46.4561
- type: nauc_recall_at_1_std
value: -32.306200000000004
- type: nauc_recall_at_1_diff1
value: 4.4164
- type: nauc_recall_at_3_max
value: 17.041600000000003
- type: nauc_recall_at_3_std
value: 23.9913
- type: nauc_recall_at_3_diff1
value: 39.9943
- type: nauc_recall_at_5_max
value: 3.8781000000000003
- type: nauc_recall_at_5_std
value: 21.1723
- type: nauc_recall_at_5_diff1
value: 7.9961
- type: nauc_recall_at_10_max
value: 11.1446
- type: nauc_recall_at_10_std
value: 15.9162
- type: nauc_recall_at_10_diff1
value: 5.334
- type: nauc_recall_at_20_max
value: 0.585
- type: nauc_recall_at_20_std
value: 11.422799999999999
- type: nauc_recall_at_20_diff1
value: -4.172
- type: nauc_recall_at_100_max
value: 13.1038
- type: nauc_recall_at_100_std
value: 16.5849
- type: nauc_recall_at_100_diff1
value: -5.8172
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 46.4561
- type: nauc_precision_at_1_std
value: -32.306200000000004
- type: nauc_precision_at_1_diff1
value: 4.4164
- type: nauc_precision_at_3_max
value: 17.041600000000003
- type: nauc_precision_at_3_std
value: 23.9913
- type: nauc_precision_at_3_diff1
value: 39.9943
- type: nauc_precision_at_5_max
value: 3.8781000000000003
- type: nauc_precision_at_5_std
value: 21.1723
- type: nauc_precision_at_5_diff1
value: 7.9961
- type: nauc_precision_at_10_max
value: 11.1446
- type: nauc_precision_at_10_std
value: 15.9162
- type: nauc_precision_at_10_diff1
value: 5.334
- type: nauc_precision_at_20_max
value: 0.585
- type: nauc_precision_at_20_std
value: 11.422799999999999
- type: nauc_precision_at_20_diff1
value: -4.172
- type: nauc_precision_at_100_max
value: 13.1038
- type: nauc_precision_at_100_std
value: 16.5849
- type: nauc_precision_at_100_diff1
value: -5.8172
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 46.4561
- type: nauc_mrr_at_1_std
value: -32.306200000000004
- type: nauc_mrr_at_1_diff1
value: 4.4164
- type: nauc_mrr_at_3_max
value: 24.696299999999997
- type: nauc_mrr_at_3_std
value: 1.8696000000000002
- type: nauc_mrr_at_3_diff1
value: 26.0786
- type: nauc_mrr_at_5_max
value: 16.475
- type: nauc_mrr_at_5_std
value: 3.9592
- type: nauc_mrr_at_5_diff1
value: 13.389499999999998
- type: nauc_mrr_at_10_max
value: 16.2084
- type: nauc_mrr_at_10_std
value: 5.8298000000000005
- type: nauc_mrr_at_10_diff1
value: 10.8911
- type: nauc_mrr_at_20_max
value: 11.9237
- type: nauc_mrr_at_20_std
value: 5.7805
- type: nauc_mrr_at_20_diff1
value: 6.8079
- type: nauc_mrr_at_100_max
value: 12.779399999999999
- type: nauc_mrr_at_100_std
value: 8.5426
- type: nauc_mrr_at_100_diff1
value: 3.11
- type: nauc_mrr_at_1000_max
value: 12.587200000000001
- type: nauc_mrr_at_1000_std
value: 8.2159
- type: nauc_mrr_at_1000_diff1
value: 3.3531
- type: main_score
value: 1.8950000000000002
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.621
- type: ndcg_at_3
value: 1.9449999999999998
- type: ndcg_at_5
value: 2.7470000000000003
- type: ndcg_at_10
value: 3.936
- type: ndcg_at_20
value: 6.0729999999999995
- type: ndcg_at_100
value: 16.366
- type: ndcg_at_1000
value: 19.769000000000002
- type: map_at_1
value: 0.621
- type: map_at_3
value: 1.553
- type: map_at_5
value: 2.019
- type: map_at_10
value: 2.5
- type: map_at_20
value: 3.055
- type: map_at_100
value: 4.247999999999999
- type: map_at_1000
value: 4.443
- type: recall_at_1
value: 0.621
- type: recall_at_3
value: 3.106
- type: recall_at_5
value: 4.968999999999999
- type: recall_at_10
value: 8.696
- type: recall_at_20
value: 17.391000000000002
- type: recall_at_100
value: 76.398
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.621
- type: precision_at_3
value: 1.035
- type: precision_at_5
value: 0.9939999999999999
- type: precision_at_10
value: 0.8699999999999999
- type: precision_at_20
value: 0.8699999999999999
- type: precision_at_100
value: 0.764
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.6211
- type: mrr_at_3
value: 1.5528
- type: mrr_at_5
value: 2.0185999999999997
- type: mrr_at_10
value: 2.4998
- type: mrr_at_20
value: 3.0547
- type: mrr_at_100
value: 4.2485
- type: mrr_at_1000
value: 4.4432
- type: nauc_ndcg_at_1_max
value: -49.7187
- type: nauc_ndcg_at_1_std
value: -49.7187
- type: nauc_ndcg_at_1_diff1
value: -20.5681
- type: nauc_ndcg_at_3_max
value: -40.8251
- type: nauc_ndcg_at_3_std
value: -30.895400000000002
- type: nauc_ndcg_at_3_diff1
value: -6.4114
- type: nauc_ndcg_at_5_max
value: -28.4846
- type: nauc_ndcg_at_5_std
value: -20.5221
- type: nauc_ndcg_at_5_diff1
value: -0.8007
- type: nauc_ndcg_at_10_max
value: -20.3348
- type: nauc_ndcg_at_10_std
value: -8.2217
- type: nauc_ndcg_at_10_diff1
value: 0.5930000000000001
- type: nauc_ndcg_at_20_max
value: -19.456699999999998
- type: nauc_ndcg_at_20_std
value: -9.5993
- type: nauc_ndcg_at_20_diff1
value: -2.6712
- type: nauc_ndcg_at_100_max
value: -15.7733
- type: nauc_ndcg_at_100_std
value: -5.1976
- type: nauc_ndcg_at_100_diff1
value: 3.029
- type: nauc_ndcg_at_1000_max
value: -21.9004
- type: nauc_ndcg_at_1000_std
value: -11.8486
- type: nauc_ndcg_at_1000_diff1
value: -2.4699
- type: nauc_map_at_1_max
value: -49.7187
- type: nauc_map_at_1_std
value: -49.7187
- type: nauc_map_at_1_diff1
value: -20.5681
- type: nauc_map_at_3_max
value: -42.530499999999996
- type: nauc_map_at_3_std
value: -34.239999999999995
- type: nauc_map_at_3_diff1
value: -8.7485
- type: nauc_map_at_5_max
value: -32.3882
- type: nauc_map_at_5_std
value: -25.2735
- type: nauc_map_at_5_diff1
value: -3.7768
- type: nauc_map_at_10_max
value: -26.5982
- type: nauc_map_at_10_std
value: -16.7374
- type: nauc_map_at_10_diff1
value: -2.3562
- type: nauc_map_at_20_max
value: -25.2884
- type: nauc_map_at_20_std
value: -16.1507
- type: nauc_map_at_20_diff1
value: -3.5117000000000003
- type: nauc_map_at_100_max
value: -24.921499999999998
- type: nauc_map_at_100_std
value: -15.5839
- type: nauc_map_at_100_diff1
value: -3.2183
- type: nauc_map_at_1000_max
value: -25.655499999999996
- type: nauc_map_at_1000_std
value: -16.3961
- type: nauc_map_at_1000_diff1
value: -3.8159
- type: nauc_recall_at_1_max
value: -49.7187
- type: nauc_recall_at_1_std
value: -49.7187
- type: nauc_recall_at_1_diff1
value: -20.5681
- type: nauc_recall_at_3_max
value: -38.1894
- type: nauc_recall_at_3_std
value: -25.753700000000002
- type: nauc_recall_at_3_diff1
value: -2.8386
- type: nauc_recall_at_5_max
value: -23.336000000000002
- type: nauc_recall_at_5_std
value: -14.365400000000001
- type: nauc_recall_at_5_diff1
value: 3.0241000000000002
- type: nauc_recall_at_10_max
value: -13.7581
- type: nauc_recall_at_10_std
value: 0.758
- type: nauc_recall_at_10_diff1
value: 3.3952999999999998
- type: nauc_recall_at_20_max
value: -15.1755
- type: nauc_recall_at_20_std
value: -5.1234
- type: nauc_recall_at_20_diff1
value: -2.7003
- type: nauc_recall_at_100_max
value: -3.2379
- type: nauc_recall_at_100_std
value: 8.405
- type: nauc_recall_at_100_diff1
value: 14.2268
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -49.7187
- type: nauc_precision_at_1_std
value: -49.7187
- type: nauc_precision_at_1_diff1
value: -20.5681
- type: nauc_precision_at_3_max
value: -38.1894
- type: nauc_precision_at_3_std
value: -25.753700000000002
- type: nauc_precision_at_3_diff1
value: -2.8386
- type: nauc_precision_at_5_max
value: -23.336000000000002
- type: nauc_precision_at_5_std
value: -14.365400000000001
- type: nauc_precision_at_5_diff1
value: 3.0241000000000002
- type: nauc_precision_at_10_max
value: -13.7581
- type: nauc_precision_at_10_std
value: 0.758
- type: nauc_precision_at_10_diff1
value: 3.3952999999999998
- type: nauc_precision_at_20_max
value: -15.1755
- type: nauc_precision_at_20_std
value: -5.1234
- type: nauc_precision_at_20_diff1
value: -2.7003
- type: nauc_precision_at_100_max
value: -3.2379
- type: nauc_precision_at_100_std
value: 8.405
- type: nauc_precision_at_100_diff1
value: 14.2268
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -49.7187
- type: nauc_mrr_at_1_std
value: -49.7187
- type: nauc_mrr_at_1_diff1
value: -20.5681
- type: nauc_mrr_at_3_max
value: -42.530499999999996
- type: nauc_mrr_at_3_std
value: -34.239999999999995
- type: nauc_mrr_at_3_diff1
value: -8.7485
- type: nauc_mrr_at_5_max
value: -32.3882
- type: nauc_mrr_at_5_std
value: -25.2735
- type: nauc_mrr_at_5_diff1
value: -3.7768
- type: nauc_mrr_at_10_max
value: -26.5982
- type: nauc_mrr_at_10_std
value: -16.7374
- type: nauc_mrr_at_10_diff1
value: -2.3562
- type: nauc_mrr_at_20_max
value: -25.2884
- type: nauc_mrr_at_20_std
value: -16.1507
- type: nauc_mrr_at_20_diff1
value: -3.5117000000000003
- type: nauc_mrr_at_100_max
value: -24.921499999999998
- type: nauc_mrr_at_100_std
value: -15.5839
- type: nauc_mrr_at_100_diff1
value: -3.2183
- type: nauc_mrr_at_1000_max
value: -25.655499999999996
- type: nauc_mrr_at_1000_std
value: -16.3961
- type: nauc_mrr_at_1000_diff1
value: -3.8159
- type: main_score
value: 3.936
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 1.075
- type: ndcg_at_3
value: 1.952
- type: ndcg_at_5
value: 2.8080000000000003
- type: ndcg_at_10
value: 3.665
- type: ndcg_at_20
value: 5.686
- type: ndcg_at_100
value: 14.824000000000002
- type: ndcg_at_1000
value: 19.533
- type: map_at_1
value: 1.075
- type: map_at_3
value: 1.703
- type: map_at_5
value: 2.1590000000000003
- type: map_at_10
value: 2.5069999999999997
- type: map_at_20
value: 3.052
- type: map_at_100
value: 4.165
- type: map_at_1000
value: 4.431
- type: recall_at_1
value: 1.075
- type: recall_at_3
value: 2.688
- type: recall_at_5
value: 4.839
- type: recall_at_10
value: 7.527
- type: recall_at_20
value: 15.591
- type: recall_at_100
value: 67.204
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 1.075
- type: precision_at_3
value: 0.8959999999999999
- type: precision_at_5
value: 0.968
- type: precision_at_10
value: 0.753
- type: precision_at_20
value: 0.7799999999999999
- type: precision_at_100
value: 0.672
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 1.0753000000000001
- type: mrr_at_3
value: 1.7025
- type: mrr_at_5
value: 2.1595
- type: mrr_at_10
value: 2.5066
- type: mrr_at_20
value: 3.0518
- type: mrr_at_100
value: 4.165
- type: mrr_at_1000
value: 4.4308
- type: nauc_ndcg_at_1_max
value: 21.262700000000002
- type: nauc_ndcg_at_1_std
value: -41.7253
- type: nauc_ndcg_at_1_diff1
value: 21.262700000000002
- type: nauc_ndcg_at_3_max
value: 16.2895
- type: nauc_ndcg_at_3_std
value: -21.9452
- type: nauc_ndcg_at_3_diff1
value: 12.0077
- type: nauc_ndcg_at_5_max
value: 14.027999999999999
- type: nauc_ndcg_at_5_std
value: -5.2867999999999995
- type: nauc_ndcg_at_5_diff1
value: 1.3698
- type: nauc_ndcg_at_10_max
value: 6.0018
- type: nauc_ndcg_at_10_std
value: -9.074
- type: nauc_ndcg_at_10_diff1
value: 1.3088
- type: nauc_ndcg_at_20_max
value: -6.839
- type: nauc_ndcg_at_20_std
value: -17.1404
- type: nauc_ndcg_at_20_diff1
value: -12.3198
- type: nauc_ndcg_at_100_max
value: 2.491
- type: nauc_ndcg_at_100_std
value: -5.4581
- type: nauc_ndcg_at_100_diff1
value: -2.6779
- type: nauc_ndcg_at_1000_max
value: 0.6387999999999999
- type: nauc_ndcg_at_1000_std
value: -12.7081
- type: nauc_ndcg_at_1000_diff1
value: -5.937
- type: nauc_map_at_1_max
value: 21.262700000000002
- type: nauc_map_at_1_std
value: -41.7253
- type: nauc_map_at_1_diff1
value: 21.262700000000002
- type: nauc_map_at_3_max
value: 16.7498
- type: nauc_map_at_3_std
value: -25.7376
- type: nauc_map_at_3_diff1
value: 12.853
- type: nauc_map_at_5_max
value: 14.973
- type: nauc_map_at_5_std
value: -13.637099999999998
- type: nauc_map_at_5_diff1
value: 5.048699999999999
- type: nauc_map_at_10_max
value: 10.3348
- type: nauc_map_at_10_std
value: -14.7688
- type: nauc_map_at_10_diff1
value: 4.5799
- type: nauc_map_at_20_max
value: 2.9443
- type: nauc_map_at_20_std
value: -18.388299999999997
- type: nauc_map_at_20_diff1
value: -2.883
- type: nauc_map_at_100_max
value: 4.2533
- type: nauc_map_at_100_std
value: -15.348700000000001
- type: nauc_map_at_100_diff1
value: -2.0131
- type: nauc_map_at_1000_max
value: 4.2232
- type: nauc_map_at_1000_std
value: -16.1977
- type: nauc_map_at_1000_diff1
value: -2.1845
- type: nauc_recall_at_1_max
value: 21.262700000000002
- type: nauc_recall_at_1_std
value: -41.7253
- type: nauc_recall_at_1_diff1
value: 21.262700000000002
- type: nauc_recall_at_3_max
value: 15.5258
- type: nauc_recall_at_3_std
value: -14.8099
- type: nauc_recall_at_3_diff1
value: 10.6104
- type: nauc_recall_at_5_max
value: 12.767800000000001
- type: nauc_recall_at_5_std
value: 6.8180000000000005
- type: nauc_recall_at_5_diff1
value: -3.8459
- type: nauc_recall_at_10_max
value: 0.5512
- type: nauc_recall_at_10_std
value: -3.2002
- type: nauc_recall_at_10_diff1
value: -2.238
- type: nauc_recall_at_20_max
value: -15.572099999999999
- type: nauc_recall_at_20_std
value: -17.1781
- type: nauc_recall_at_20_diff1
value: -20.64
- type: nauc_recall_at_100_max
value: 5.5887
- type: nauc_recall_at_100_std
value: 6.551
- type: nauc_recall_at_100_diff1
value: 2.6925999999999997
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 21.262700000000002
- type: nauc_precision_at_1_std
value: -41.7253
- type: nauc_precision_at_1_diff1
value: 21.262700000000002
- type: nauc_precision_at_3_max
value: 15.5258
- type: nauc_precision_at_3_std
value: -14.8099
- type: nauc_precision_at_3_diff1
value: 10.6104
- type: nauc_precision_at_5_max
value: 12.767800000000001
- type: nauc_precision_at_5_std
value: 6.8180000000000005
- type: nauc_precision_at_5_diff1
value: -3.8459
- type: nauc_precision_at_10_max
value: 0.5512
- type: nauc_precision_at_10_std
value: -3.2002
- type: nauc_precision_at_10_diff1
value: -2.238
- type: nauc_precision_at_20_max
value: -15.572099999999999
- type: nauc_precision_at_20_std
value: -17.1781
- type: nauc_precision_at_20_diff1
value: -20.64
- type: nauc_precision_at_100_max
value: 5.5887
- type: nauc_precision_at_100_std
value: 6.551
- type: nauc_precision_at_100_diff1
value: 2.6925999999999997
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 21.262700000000002
- type: nauc_mrr_at_1_std
value: -41.7253
- type: nauc_mrr_at_1_diff1
value: 21.262700000000002
- type: nauc_mrr_at_3_max
value: 16.7498
- type: nauc_mrr_at_3_std
value: -25.7376
- type: nauc_mrr_at_3_diff1
value: 12.853
- type: nauc_mrr_at_5_max
value: 14.973
- type: nauc_mrr_at_5_std
value: -13.637099999999998
- type: nauc_mrr_at_5_diff1
value: 5.048699999999999
- type: nauc_mrr_at_10_max
value: 10.3348
- type: nauc_mrr_at_10_std
value: -14.7688
- type: nauc_mrr_at_10_diff1
value: 4.5799
- type: nauc_mrr_at_20_max
value: 2.9443
- type: nauc_mrr_at_20_std
value: -18.388299999999997
- type: nauc_mrr_at_20_diff1
value: -2.883
- type: nauc_mrr_at_100_max
value: 4.2533
- type: nauc_mrr_at_100_std
value: -15.348700000000001
- type: nauc_mrr_at_100_diff1
value: -2.0131
- type: nauc_mrr_at_1000_max
value: 4.2232
- type: nauc_mrr_at_1000_std
value: -16.1977
- type: nauc_mrr_at_1000_diff1
value: -2.1845
- type: main_score
value: 3.665
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.613
- type: ndcg_at_3
value: 1.307
- type: ndcg_at_5
value: 1.307
- type: ndcg_at_10
value: 2.843
- type: ndcg_at_20
value: 5.175
- type: ndcg_at_100
value: 13.927
- type: ndcg_at_1000
value: 18.776
- type: map_at_1
value: 0.613
- type: map_at_3
value: 1.125
- type: map_at_5
value: 1.125
- type: map_at_10
value: 1.729
- type: map_at_20
value: 2.371
- type: map_at_100
value: 3.38
- type: map_at_1000
value: 3.6540000000000004
- type: recall_at_1
value: 0.613
- type: recall_at_3
value: 1.8399999999999999
- type: recall_at_5
value: 1.8399999999999999
- type: recall_at_10
value: 6.748
- type: recall_at_20
value: 15.951
- type: recall_at_100
value: 66.258
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.613
- type: precision_at_3
value: 0.613
- type: precision_at_5
value: 0.368
- type: precision_at_10
value: 0.675
- type: precision_at_20
value: 0.7979999999999999
- type: precision_at_100
value: 0.6629999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.6134999999999999
- type: mrr_at_3
value: 1.1247
- type: mrr_at_5
value: 1.1247
- type: mrr_at_10
value: 1.7287000000000001
- type: mrr_at_20
value: 2.3708
- type: mrr_at_100
value: 3.38
- type: mrr_at_1000
value: 3.6543
- type: nauc_ndcg_at_1_max
value: -6.955400000000001
- type: nauc_ndcg_at_1_std
value: 32.3707
- type: nauc_ndcg_at_1_diff1
value: -31.731199999999998
- type: nauc_ndcg_at_3_max
value: -5.0637
- type: nauc_ndcg_at_3_std
value: -7.6478
- type: nauc_ndcg_at_3_diff1
value: -31.9542
- type: nauc_ndcg_at_5_max
value: -5.0637
- type: nauc_ndcg_at_5_std
value: -7.6478
- type: nauc_ndcg_at_5_diff1
value: -31.9542
- type: nauc_ndcg_at_10_max
value: -5.5409
- type: nauc_ndcg_at_10_std
value: -5.2786
- type: nauc_ndcg_at_10_diff1
value: -14.349300000000001
- type: nauc_ndcg_at_20_max
value: 3.7065
- type: nauc_ndcg_at_20_std
value: -2.9243
- type: nauc_ndcg_at_20_diff1
value: -11.675
- type: nauc_ndcg_at_100_max
value: 5.6824
- type: nauc_ndcg_at_100_std
value: 4.7786
- type: nauc_ndcg_at_100_diff1
value: -15.0033
- type: nauc_ndcg_at_1000_max
value: 2.2786
- type: nauc_ndcg_at_1000_std
value: 1.9116000000000002
- type: nauc_ndcg_at_1000_diff1
value: -14.347299999999999
- type: nauc_map_at_1_max
value: -6.955400000000001
- type: nauc_map_at_1_std
value: 32.3707
- type: nauc_map_at_1_diff1
value: -31.731199999999998
- type: nauc_map_at_3_max
value: -6.5623000000000005
- type: nauc_map_at_3_std
value: -1.4144999999999999
- type: nauc_map_at_3_diff1
value: -32.321299999999994
- type: nauc_map_at_5_max
value: -6.5623000000000005
- type: nauc_map_at_5_std
value: -1.4144999999999999
- type: nauc_map_at_5_diff1
value: -32.321299999999994
- type: nauc_map_at_10_max
value: -5.9183
- type: nauc_map_at_10_std
value: -1.3847
- type: nauc_map_at_10_diff1
value: -21.0487
- type: nauc_map_at_20_max
value: -0.3147
- type: nauc_map_at_20_std
value: -0.8122
- type: nauc_map_at_20_diff1
value: -18.2027
- type: nauc_map_at_100_max
value: 0.5482
- type: nauc_map_at_100_std
value: 2.1596
- type: nauc_map_at_100_diff1
value: -17.8683
- type: nauc_map_at_1000_max
value: -0.0387
- type: nauc_map_at_1000_std
value: 1.7451999999999999
- type: nauc_map_at_1000_diff1
value: -17.9499
- type: nauc_recall_at_1_max
value: -6.955400000000001
- type: nauc_recall_at_1_std
value: 32.3707
- type: nauc_recall_at_1_diff1
value: -31.731199999999998
- type: nauc_recall_at_3_max
value: -2.1052999999999997
- type: nauc_recall_at_3_std
value: -18.885199999999998
- type: nauc_recall_at_3_diff1
value: -31.206699999999998
- type: nauc_recall_at_5_max
value: -2.1052999999999997
- type: nauc_recall_at_5_std
value: -18.885199999999998
- type: nauc_recall_at_5_diff1
value: -31.206699999999998
- type: nauc_recall_at_10_max
value: -5.5279
- type: nauc_recall_at_10_std
value: -8.5135
- type: nauc_recall_at_10_diff1
value: -7.7075000000000005
- type: nauc_recall_at_20_max
value: 6.4999
- type: nauc_recall_at_20_std
value: -3.8489000000000004
- type: nauc_recall_at_20_diff1
value: -7.310999999999999
- type: nauc_recall_at_100_max
value: 9.9534
- type: nauc_recall_at_100_std
value: 8.2841
- type: nauc_recall_at_100_diff1
value: -15.723300000000002
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -6.955400000000001
- type: nauc_precision_at_1_std
value: 32.3707
- type: nauc_precision_at_1_diff1
value: -31.731199999999998
- type: nauc_precision_at_3_max
value: -2.1052999999999997
- type: nauc_precision_at_3_std
value: -18.885199999999998
- type: nauc_precision_at_3_diff1
value: -31.206699999999998
- type: nauc_precision_at_5_max
value: -2.1052999999999997
- type: nauc_precision_at_5_std
value: -18.885199999999998
- type: nauc_precision_at_5_diff1
value: -31.206699999999998
- type: nauc_precision_at_10_max
value: -5.5279
- type: nauc_precision_at_10_std
value: -8.5135
- type: nauc_precision_at_10_diff1
value: -7.7075000000000005
- type: nauc_precision_at_20_max
value: 6.4999
- type: nauc_precision_at_20_std
value: -3.8489000000000004
- type: nauc_precision_at_20_diff1
value: -7.310999999999999
- type: nauc_precision_at_100_max
value: 9.9534
- type: nauc_precision_at_100_std
value: 8.2841
- type: nauc_precision_at_100_diff1
value: -15.723300000000002
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -6.955400000000001
- type: nauc_mrr_at_1_std
value: 32.3707
- type: nauc_mrr_at_1_diff1
value: -31.731199999999998
- type: nauc_mrr_at_3_max
value: -6.5623000000000005
- type: nauc_mrr_at_3_std
value: -1.4144999999999999
- type: nauc_mrr_at_3_diff1
value: -32.321299999999994
- type: nauc_mrr_at_5_max
value: -6.5623000000000005
- type: nauc_mrr_at_5_std
value: -1.4144999999999999
- type: nauc_mrr_at_5_diff1
value: -32.321299999999994
- type: nauc_mrr_at_10_max
value: -5.9183
- type: nauc_mrr_at_10_std
value: -1.3847
- type: nauc_mrr_at_10_diff1
value: -21.0487
- type: nauc_mrr_at_20_max
value: -0.3147
- type: nauc_mrr_at_20_std
value: -0.8122
- type: nauc_mrr_at_20_diff1
value: -18.2027
- type: nauc_mrr_at_100_max
value: 0.5482
- type: nauc_mrr_at_100_std
value: 2.1596
- type: nauc_mrr_at_100_diff1
value: -17.8683
- type: nauc_mrr_at_1000_max
value: -0.0387
- type: nauc_mrr_at_1000_std
value: 1.7451999999999999
- type: nauc_mrr_at_1000_diff1
value: -17.9499
- type: main_score
value: 2.843
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.532
- type: ndcg_at_3
value: 1.133
- type: ndcg_at_5
value: 1.592
- type: ndcg_at_10
value: 3.001
- type: ndcg_at_20
value: 4.599
- type: ndcg_at_100
value: 13.530000000000001
- type: ndcg_at_1000
value: 18.706999999999997
- type: map_at_1
value: 0.532
- type: map_at_3
value: 0.975
- type: map_at_5
value: 1.2409999999999999
- type: map_at_10
value: 1.8419999999999999
- type: map_at_20
value: 2.273
- type: map_at_100
value: 3.3529999999999998
- type: map_at_1000
value: 3.642
- type: recall_at_1
value: 0.532
- type: recall_at_3
value: 1.5959999999999999
- type: recall_at_5
value: 2.6599999999999997
- type: recall_at_10
value: 6.915
- type: recall_at_20
value: 13.297999999999998
- type: recall_at_100
value: 63.83
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.532
- type: precision_at_3
value: 0.532
- type: precision_at_5
value: 0.532
- type: precision_at_10
value: 0.6910000000000001
- type: precision_at_20
value: 0.6649999999999999
- type: precision_at_100
value: 0.638
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.5319
- type: mrr_at_3
value: 0.9752000000000001
- type: mrr_at_5
value: 1.2411
- type: mrr_at_10
value: 1.8416
- type: mrr_at_20
value: 2.2734
- type: mrr_at_100
value: 3.3527
- type: mrr_at_1000
value: 3.6415
- type: nauc_ndcg_at_1_max
value: 100.0
- type: nauc_ndcg_at_1_std
value: 100.0
- type: nauc_ndcg_at_1_diff1
value: 100.0
- type: nauc_ndcg_at_3_max
value: 43.0668
- type: nauc_ndcg_at_3_std
value: 53.02329999999999
- type: nauc_ndcg_at_3_diff1
value: 42.2661
- type: nauc_ndcg_at_5_max
value: 15.126999999999999
- type: nauc_ndcg_at_5_std
value: 44.332899999999995
- type: nauc_ndcg_at_5_diff1
value: 18.2645
- type: nauc_ndcg_at_10_max
value: 19.707900000000002
- type: nauc_ndcg_at_10_std
value: 24.8599
- type: nauc_ndcg_at_10_diff1
value: 8.5712
- type: nauc_ndcg_at_20_max
value: 18.529999999999998
- type: nauc_ndcg_at_20_std
value: 23.8624
- type: nauc_ndcg_at_20_diff1
value: 3.8219999999999996
- type: nauc_ndcg_at_100_max
value: 13.3018
- type: nauc_ndcg_at_100_std
value: 13.919699999999999
- type: nauc_ndcg_at_100_diff1
value: 5.1807
- type: nauc_ndcg_at_1000_max
value: 15.4975
- type: nauc_ndcg_at_1000_std
value: 19.0027
- type: nauc_ndcg_at_1000_diff1
value: 10.5977
- type: nauc_map_at_1_max
value: 100.0
- type: nauc_map_at_1_std
value: 100.0
- type: nauc_map_at_1_diff1
value: 100.0
- type: nauc_map_at_3_max
value: 52.9714
- type: nauc_map_at_3_std
value: 62.1425
- type: nauc_map_at_3_diff1
value: 49.1278
- type: nauc_map_at_5_max
value: 30.0502
- type: nauc_map_at_5_std
value: 53.7191
- type: nauc_map_at_5_diff1
value: 29.7903
- type: nauc_map_at_10_max
value: 28.0566
- type: nauc_map_at_10_std
value: 37.3678
- type: nauc_map_at_10_diff1
value: 19.3192
- type: nauc_map_at_20_max
value: 24.929499999999997
- type: nauc_map_at_20_std
value: 34.0077
- type: nauc_map_at_20_diff1
value: 14.304
- type: nauc_map_at_100_max
value: 21.8729
- type: nauc_map_at_100_std
value: 27.860000000000003
- type: nauc_map_at_100_diff1
value: 15.3385
- type: nauc_map_at_1000_max
value: 22.311700000000002
- type: nauc_map_at_1000_std
value: 28.900100000000002
- type: nauc_map_at_1000_diff1
value: 16.1893
- type: nauc_recall_at_1_max
value: 100.0
- type: nauc_recall_at_1_std
value: 100.0
- type: nauc_recall_at_1_diff1
value: 100.0
- type: nauc_recall_at_3_max
value: 24.990000000000002
- type: nauc_recall_at_3_std
value: 36.1992
- type: nauc_recall_at_3_diff1
value: 30.3501
- type: nauc_recall_at_5_max
value: -6.6037
- type: nauc_recall_at_5_std
value: 30.852899999999998
- type: nauc_recall_at_5_diff1
value: 1.7645000000000002
- type: nauc_recall_at_10_max
value: 13.189899999999998
- type: nauc_recall_at_10_std
value: 13.314699999999998
- type: nauc_recall_at_10_diff1
value: -0.8269000000000001
- type: nauc_recall_at_20_max
value: 15.8802
- type: nauc_recall_at_20_std
value: 17.947499999999998
- type: nauc_recall_at_20_diff1
value: -2.5606
- type: nauc_recall_at_100_max
value: 9.5721
- type: nauc_recall_at_100_std
value: 6.9126
- type: nauc_recall_at_100_diff1
value: -2.2487
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 100.0
- type: nauc_precision_at_1_std
value: 100.0
- type: nauc_precision_at_1_diff1
value: 100.0
- type: nauc_precision_at_3_max
value: 24.990000000000002
- type: nauc_precision_at_3_std
value: 36.1992
- type: nauc_precision_at_3_diff1
value: 30.3501
- type: nauc_precision_at_5_max
value: -6.6037
- type: nauc_precision_at_5_std
value: 30.852899999999998
- type: nauc_precision_at_5_diff1
value: 1.7645000000000002
- type: nauc_precision_at_10_max
value: 13.189899999999998
- type: nauc_precision_at_10_std
value: 13.314699999999998
- type: nauc_precision_at_10_diff1
value: -0.8269000000000001
- type: nauc_precision_at_20_max
value: 15.8802
- type: nauc_precision_at_20_std
value: 17.947499999999998
- type: nauc_precision_at_20_diff1
value: -2.5606
- type: nauc_precision_at_100_max
value: 9.5721
- type: nauc_precision_at_100_std
value: 6.9126
- type: nauc_precision_at_100_diff1
value: -2.2487
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 100.0
- type: nauc_mrr_at_1_std
value: 100.0
- type: nauc_mrr_at_1_diff1
value: 100.0
- type: nauc_mrr_at_3_max
value: 52.9714
- type: nauc_mrr_at_3_std
value: 62.1425
- type: nauc_mrr_at_3_diff1
value: 49.1278
- type: nauc_mrr_at_5_max
value: 30.0502
- type: nauc_mrr_at_5_std
value: 53.7191
- type: nauc_mrr_at_5_diff1
value: 29.7903
- type: nauc_mrr_at_10_max
value: 28.0566
- type: nauc_mrr_at_10_std
value: 37.3678
- type: nauc_mrr_at_10_diff1
value: 19.3192
- type: nauc_mrr_at_20_max
value: 24.929499999999997
- type: nauc_mrr_at_20_std
value: 34.0077
- type: nauc_mrr_at_20_diff1
value: 14.304
- type: nauc_mrr_at_100_max
value: 21.8729
- type: nauc_mrr_at_100_std
value: 27.860000000000003
- type: nauc_mrr_at_100_diff1
value: 15.3385
- type: nauc_mrr_at_1000_max
value: 22.311700000000002
- type: nauc_mrr_at_1000_std
value: 28.900100000000002
- type: nauc_mrr_at_1000_diff1
value: 16.1893
- type: main_score
value: 3.001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.966
- type: ndcg_at_3
value: 2.122
- type: ndcg_at_5
value: 3.3070000000000004
- type: ndcg_at_10
value: 4.409
- type: ndcg_at_20
value: 5.734
- type: ndcg_at_100
value: 14.12
- type: ndcg_at_1000
value: 19.293
- type: map_at_1
value: 0.966
- type: map_at_3
value: 1.8519999999999999
- type: map_at_5
value: 2.504
- type: map_at_10
value: 2.965
- type: map_at_20
value: 3.318
- type: map_at_100
value: 4.249
- type: map_at_1000
value: 4.522
- type: recall_at_1
value: 0.966
- type: recall_at_3
value: 2.899
- type: recall_at_5
value: 5.797
- type: recall_at_10
value: 9.179
- type: recall_at_20
value: 14.493
- type: recall_at_100
value: 63.285000000000004
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.966
- type: precision_at_3
value: 0.966
- type: precision_at_5
value: 1.159
- type: precision_at_10
value: 0.918
- type: precision_at_20
value: 0.7250000000000001
- type: precision_at_100
value: 0.633
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.9662000000000001
- type: mrr_at_3
value: 1.8519
- type: mrr_at_5
value: 2.504
- type: mrr_at_10
value: 2.9648999999999996
- type: mrr_at_20
value: 3.3182000000000005
- type: mrr_at_100
value: 4.249
- type: mrr_at_1000
value: 4.5216
- type: nauc_ndcg_at_1_max
value: 100.0
- type: nauc_ndcg_at_1_std
value: 100.0
- type: nauc_ndcg_at_1_diff1
value: 54.942
- type: nauc_ndcg_at_3_max
value: 49.4196
- type: nauc_ndcg_at_3_std
value: 56.1838
- type: nauc_ndcg_at_3_diff1
value: 32.665499999999994
- type: nauc_ndcg_at_5_max
value: 40.9893
- type: nauc_ndcg_at_5_std
value: 47.916799999999995
- type: nauc_ndcg_at_5_diff1
value: 15.5136
- type: nauc_ndcg_at_10_max
value: 29.115299999999998
- type: nauc_ndcg_at_10_std
value: 32.858
- type: nauc_ndcg_at_10_diff1
value: 17.005300000000002
- type: nauc_ndcg_at_20_max
value: 31.2368
- type: nauc_ndcg_at_20_std
value: 21.3015
- type: nauc_ndcg_at_20_diff1
value: 18.6284
- type: nauc_ndcg_at_100_max
value: 25.645400000000002
- type: nauc_ndcg_at_100_std
value: 12.3866
- type: nauc_ndcg_at_100_diff1
value: 10.502
- type: nauc_ndcg_at_1000_max
value: 33.4067
- type: nauc_ndcg_at_1000_std
value: 24.5891
- type: nauc_ndcg_at_1000_diff1
value: 15.9563
- type: nauc_map_at_1_max
value: 100.0
- type: nauc_map_at_1_std
value: 100.0
- type: nauc_map_at_1_diff1
value: 54.942
- type: nauc_map_at_3_max
value: 56.2303
- type: nauc_map_at_3_std
value: 62.7938
- type: nauc_map_at_3_diff1
value: 35.7282
- type: nauc_map_at_5_max
value: 48.2731
- type: nauc_map_at_5_std
value: 55.2495
- type: nauc_map_at_5_diff1
value: 22.6228
- type: nauc_map_at_10_max
value: 39.508700000000005
- type: nauc_map_at_10_std
value: 44.6957
- type: nauc_map_at_10_diff1
value: 22.8637
- type: nauc_map_at_20_max
value: 39.6895
- type: nauc_map_at_20_std
value: 38.8865
- type: nauc_map_at_20_diff1
value: 23.1892
- type: nauc_map_at_100_max
value: 38.5582
- type: nauc_map_at_100_std
value: 35.4221
- type: nauc_map_at_100_diff1
value: 20.6822
- type: nauc_map_at_1000_max
value: 39.5093
- type: nauc_map_at_1000_std
value: 36.8263
- type: nauc_map_at_1000_diff1
value: 21.2755
- type: nauc_recall_at_1_max
value: 100.0
- type: nauc_recall_at_1_std
value: 100.0
- type: nauc_recall_at_1_diff1
value: 54.942
- type: nauc_recall_at_3_max
value: 36.7448
- type: nauc_recall_at_3_std
value: 43.7074
- type: nauc_recall_at_3_diff1
value: 26.950200000000002
- type: nauc_recall_at_5_max
value: 31.4159
- type: nauc_recall_at_5_std
value: 38.074200000000005
- type: nauc_recall_at_5_diff1
value: 5.5841
- type: nauc_recall_at_10_max
value: 17.8359
- type: nauc_recall_at_10_std
value: 19.564799999999998
- type: nauc_recall_at_10_diff1
value: 10.7378
- type: nauc_recall_at_20_max
value: 24.5378
- type: nauc_recall_at_20_std
value: 3.8707
- type: nauc_recall_at_20_diff1
value: 15.1151
- type: nauc_recall_at_100_max
value: 12.8051
- type: nauc_recall_at_100_std
value: -9.097900000000001
- type: nauc_recall_at_100_diff1
value: 0.7080000000000001
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 100.0
- type: nauc_precision_at_1_std
value: 100.0
- type: nauc_precision_at_1_diff1
value: 54.942
- type: nauc_precision_at_3_max
value: 36.7448
- type: nauc_precision_at_3_std
value: 43.7074
- type: nauc_precision_at_3_diff1
value: 26.950200000000002
- type: nauc_precision_at_5_max
value: 31.4159
- type: nauc_precision_at_5_std
value: 38.074200000000005
- type: nauc_precision_at_5_diff1
value: 5.5841
- type: nauc_precision_at_10_max
value: 17.8359
- type: nauc_precision_at_10_std
value: 19.564799999999998
- type: nauc_precision_at_10_diff1
value: 10.7378
- type: nauc_precision_at_20_max
value: 24.5378
- type: nauc_precision_at_20_std
value: 3.8707
- type: nauc_precision_at_20_diff1
value: 15.1151
- type: nauc_precision_at_100_max
value: 12.8051
- type: nauc_precision_at_100_std
value: -9.097900000000001
- type: nauc_precision_at_100_diff1
value: 0.7080000000000001
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 100.0
- type: nauc_mrr_at_1_std
value: 100.0
- type: nauc_mrr_at_1_diff1
value: 54.942
- type: nauc_mrr_at_3_max
value: 56.2303
- type: nauc_mrr_at_3_std
value: 62.7938
- type: nauc_mrr_at_3_diff1
value: 35.7282
- type: nauc_mrr_at_5_max
value: 48.2731
- type: nauc_mrr_at_5_std
value: 55.2495
- type: nauc_mrr_at_5_diff1
value: 22.6228
- type: nauc_mrr_at_10_max
value: 39.508700000000005
- type: nauc_mrr_at_10_std
value: 44.6957
- type: nauc_mrr_at_10_diff1
value: 22.8637
- type: nauc_mrr_at_20_max
value: 39.6895
- type: nauc_mrr_at_20_std
value: 38.8865
- type: nauc_mrr_at_20_diff1
value: 23.1892
- type: nauc_mrr_at_100_max
value: 38.5582
- type: nauc_mrr_at_100_std
value: 35.4221
- type: nauc_mrr_at_100_diff1
value: 20.6822
- type: nauc_mrr_at_1000_max
value: 39.5093
- type: nauc_mrr_at_1000_std
value: 36.8263
- type: nauc_mrr_at_1000_diff1
value: 21.2755
- type: main_score
value: 4.409
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.774
- type: ndcg_at_3
value: 1.745
- type: ndcg_at_5
value: 2.2030000000000003
- type: ndcg_at_10
value: 2.635
- type: ndcg_at_20
value: 3.514
- type: ndcg_at_100
value: 8.031
- type: ndcg_at_1000
value: 16.525000000000002
- type: map_at_1
value: 0.774
- type: map_at_3
value: 1.4829999999999999
- type: map_at_5
value: 1.725
- type: map_at_10
value: 1.9
- type: map_at_20
value: 2.1399999999999997
- type: map_at_100
value: 2.71
- type: map_at_1000
value: 3.0220000000000002
- type: recall_at_1
value: 0.774
- type: recall_at_3
value: 2.5149999999999997
- type: recall_at_5
value: 3.675
- type: recall_at_10
value: 5.029
- type: recall_at_20
value: 8.511000000000001
- type: recall_at_100
value: 33.656000000000006
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.774
- type: precision_at_3
value: 0.8380000000000001
- type: precision_at_5
value: 0.735
- type: precision_at_10
value: 0.503
- type: precision_at_20
value: 0.426
- type: precision_at_100
value: 0.337
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.7736999999999999
- type: mrr_at_3
value: 1.4829
- type: mrr_at_5
value: 1.7247
- type: mrr_at_10
value: 1.8998000000000002
- type: mrr_at_20
value: 2.1399999999999997
- type: mrr_at_100
value: 2.71
- type: mrr_at_1000
value: 3.0224
- type: nauc_ndcg_at_1_max
value: 60.5507
- type: nauc_ndcg_at_1_std
value: 17.7109
- type: nauc_ndcg_at_1_diff1
value: 69.8508
- type: nauc_ndcg_at_3_max
value: 17.8387
- type: nauc_ndcg_at_3_std
value: -12.759699999999999
- type: nauc_ndcg_at_3_diff1
value: 32.9363
- type: nauc_ndcg_at_5_max
value: 13.933300000000001
- type: nauc_ndcg_at_5_std
value: -7.4468000000000005
- type: nauc_ndcg_at_5_diff1
value: 34.0875
- type: nauc_ndcg_at_10_max
value: 24.0901
- type: nauc_ndcg_at_10_std
value: -1.9087
- type: nauc_ndcg_at_10_diff1
value: 30.859199999999998
- type: nauc_ndcg_at_20_max
value: 14.4843
- type: nauc_ndcg_at_20_std
value: -2.4103
- type: nauc_ndcg_at_20_diff1
value: 25.251800000000003
- type: nauc_ndcg_at_100_max
value: 11.147400000000001
- type: nauc_ndcg_at_100_std
value: 0.5721
- type: nauc_ndcg_at_100_diff1
value: 18.865499999999997
- type: nauc_ndcg_at_1000_max
value: 14.3921
- type: nauc_ndcg_at_1000_std
value: -1.4730999999999999
- type: nauc_ndcg_at_1000_diff1
value: 23.5761
- type: nauc_map_at_1_max
value: 60.5507
- type: nauc_map_at_1_std
value: 17.7109
- type: nauc_map_at_1_diff1
value: 69.8508
- type: nauc_map_at_3_max
value: 23.5728
- type: nauc_map_at_3_std
value: -8.4614
- type: nauc_map_at_3_diff1
value: 37.580000000000005
- type: nauc_map_at_5_max
value: 20.072300000000002
- type: nauc_map_at_5_std
value: -5.5798
- type: nauc_map_at_5_diff1
value: 37.894800000000004
- type: nauc_map_at_10_max
value: 25.3164
- type: nauc_map_at_10_std
value: -2.6436
- type: nauc_map_at_10_diff1
value: 35.591
- type: nauc_map_at_20_max
value: 20.962
- type: nauc_map_at_20_std
value: -2.7786999999999997
- type: nauc_map_at_20_diff1
value: 32.562999999999995
- type: nauc_map_at_100_max
value: 19.2988
- type: nauc_map_at_100_std
value: -1.6022
- type: nauc_map_at_100_diff1
value: 30.2483
- type: nauc_map_at_1000_max
value: 19.542399999999997
- type: nauc_map_at_1000_std
value: -1.9428
- type: nauc_map_at_1000_diff1
value: 30.5552
- type: nauc_recall_at_1_max
value: 60.5507
- type: nauc_recall_at_1_std
value: 17.7109
- type: nauc_recall_at_1_diff1
value: 69.8508
- type: nauc_recall_at_3_max
value: 7.9922
- type: nauc_recall_at_3_std
value: -20.188
- type: nauc_recall_at_3_diff1
value: 25.0336
- type: nauc_recall_at_5_max
value: 5.2796
- type: nauc_recall_at_5_std
value: -9.5635
- type: nauc_recall_at_5_diff1
value: 28.912900000000004
- type: nauc_recall_at_10_max
value: 24.0746
- type: nauc_recall_at_10_std
value: 0.1106
- type: nauc_recall_at_10_diff1
value: 25.271
- type: nauc_recall_at_20_max
value: 8.2207
- type: nauc_recall_at_20_std
value: -1.5499
- type: nauc_recall_at_20_diff1
value: 18.351200000000002
- type: nauc_recall_at_100_max
value: 6.2993
- type: nauc_recall_at_100_std
value: 2.1907
- type: nauc_recall_at_100_diff1
value: 11.477
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 60.5507
- type: nauc_precision_at_1_std
value: 17.7109
- type: nauc_precision_at_1_diff1
value: 69.8508
- type: nauc_precision_at_3_max
value: 7.9922
- type: nauc_precision_at_3_std
value: -20.188
- type: nauc_precision_at_3_diff1
value: 25.0336
- type: nauc_precision_at_5_max
value: 5.2796
- type: nauc_precision_at_5_std
value: -9.5635
- type: nauc_precision_at_5_diff1
value: 28.912900000000004
- type: nauc_precision_at_10_max
value: 24.0746
- type: nauc_precision_at_10_std
value: 0.1106
- type: nauc_precision_at_10_diff1
value: 25.271
- type: nauc_precision_at_20_max
value: 8.2207
- type: nauc_precision_at_20_std
value: -1.5499
- type: nauc_precision_at_20_diff1
value: 18.351200000000002
- type: nauc_precision_at_100_max
value: 6.2993
- type: nauc_precision_at_100_std
value: 2.1907
- type: nauc_precision_at_100_diff1
value: 11.477
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 60.5507
- type: nauc_mrr_at_1_std
value: 17.7109
- type: nauc_mrr_at_1_diff1
value: 69.8508
- type: nauc_mrr_at_3_max
value: 23.5728
- type: nauc_mrr_at_3_std
value: -8.4614
- type: nauc_mrr_at_3_diff1
value: 37.580000000000005
- type: nauc_mrr_at_5_max
value: 20.072300000000002
- type: nauc_mrr_at_5_std
value: -5.5798
- type: nauc_mrr_at_5_diff1
value: 37.894800000000004
- type: nauc_mrr_at_10_max
value: 25.3164
- type: nauc_mrr_at_10_std
value: -2.6436
- type: nauc_mrr_at_10_diff1
value: 35.591
- type: nauc_mrr_at_20_max
value: 20.962
- type: nauc_mrr_at_20_std
value: -2.7786999999999997
- type: nauc_mrr_at_20_diff1
value: 32.562999999999995
- type: nauc_mrr_at_100_max
value: 19.2988
- type: nauc_mrr_at_100_std
value: -1.6022
- type: nauc_mrr_at_100_diff1
value: 30.2483
- type: nauc_mrr_at_1000_max
value: 19.542399999999997
- type: nauc_mrr_at_1000_std
value: -1.9428
- type: nauc_mrr_at_1000_diff1
value: 30.5552
- type: main_score
value: 2.635
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 1.863
- type: ndcg_at_3
value: 3.66
- type: ndcg_at_5
value: 4.168
- type: ndcg_at_10
value: 5.173
- type: ndcg_at_20
value: 7.7090000000000005
- type: ndcg_at_100
value: 17.645
- type: ndcg_at_1000
value: 21.322
- type: map_at_1
value: 1.863
- type: map_at_3
value: 3.209
- type: map_at_5
value: 3.489
- type: map_at_10
value: 3.904
- type: map_at_20
value: 4.612
- type: map_at_100
value: 5.858
- type: map_at_1000
value: 6.069999999999999
- type: recall_at_1
value: 1.863
- type: recall_at_3
value: 4.968999999999999
- type: recall_at_5
value: 6.211
- type: recall_at_10
value: 9.317
- type: recall_at_20
value: 19.255
- type: recall_at_100
value: 74.534
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 1.863
- type: precision_at_3
value: 1.656
- type: precision_at_5
value: 1.242
- type: precision_at_10
value: 0.932
- type: precision_at_20
value: 0.963
- type: precision_at_100
value: 0.745
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 1.8634000000000002
- type: mrr_at_3
value: 3.2091000000000003
- type: mrr_at_5
value: 3.4886
- type: mrr_at_10
value: 3.9044000000000003
- type: mrr_at_20
value: 4.612299999999999
- type: mrr_at_100
value: 5.8578
- type: mrr_at_1000
value: 6.0696
- type: nauc_ndcg_at_1_max
value: 59.8106
- type: nauc_ndcg_at_1_std
value: 41.6091
- type: nauc_ndcg_at_1_diff1
value: 15.8988
- type: nauc_ndcg_at_3_max
value: 17.326900000000002
- type: nauc_ndcg_at_3_std
value: 0.8758
- type: nauc_ndcg_at_3_diff1
value: -13.537199999999999
- type: nauc_ndcg_at_5_max
value: 17.0792
- type: nauc_ndcg_at_5_std
value: -4.134
- type: nauc_ndcg_at_5_diff1
value: -14.3938
- type: nauc_ndcg_at_10_max
value: 19.2218
- type: nauc_ndcg_at_10_std
value: -4.1131
- type: nauc_ndcg_at_10_diff1
value: -0.5739
- type: nauc_ndcg_at_20_max
value: 14.7981
- type: nauc_ndcg_at_20_std
value: -0.0645
- type: nauc_ndcg_at_20_diff1
value: -1.8365
- type: nauc_ndcg_at_100_max
value: 20.259
- type: nauc_ndcg_at_100_std
value: 3.2459000000000002
- type: nauc_ndcg_at_100_diff1
value: -3.5298999999999996
- type: nauc_ndcg_at_1000_max
value: 18.958
- type: nauc_ndcg_at_1000_std
value: 2.0313999999999997
- type: nauc_ndcg_at_1000_diff1
value: -3.6224
- type: nauc_map_at_1_max
value: 59.8106
- type: nauc_map_at_1_std
value: 41.6091
- type: nauc_map_at_1_diff1
value: 15.8988
- type: nauc_map_at_3_max
value: 23.4457
- type: nauc_map_at_3_std
value: 6.589200000000001
- type: nauc_map_at_3_diff1
value: -9.1205
- type: nauc_map_at_5_max
value: 23.0402
- type: nauc_map_at_5_std
value: 2.8784
- type: nauc_map_at_5_diff1
value: -10.0377
- type: nauc_map_at_10_max
value: 23.477
- type: nauc_map_at_10_std
value: 1.9317999999999997
- type: nauc_map_at_10_diff1
value: -3.1433000000000004
- type: nauc_map_at_20_max
value: 21.138199999999998
- type: nauc_map_at_20_std
value: 3.3765000000000005
- type: nauc_map_at_20_diff1
value: -3.2526
- type: nauc_map_at_100_max
value: 21.8857
- type: nauc_map_at_100_std
value: 4.147
- type: nauc_map_at_100_diff1
value: -3.5649
- type: nauc_map_at_1000_max
value: 21.8479
- type: nauc_map_at_1000_std
value: 4.0359
- type: nauc_map_at_1000_diff1
value: -3.5894000000000004
- type: nauc_recall_at_1_max
value: 59.8106
- type: nauc_recall_at_1_std
value: 41.6091
- type: nauc_recall_at_1_diff1
value: 15.8988
- type: nauc_recall_at_3_max
value: 5.8776
- type: nauc_recall_at_3_std
value: -9.775
- type: nauc_recall_at_3_diff1
value: -21.8474
- type: nauc_recall_at_5_max
value: 7.184799999999999
- type: nauc_recall_at_5_std
value: -15.965399999999999
- type: nauc_recall_at_5_diff1
value: -21.5915
- type: nauc_recall_at_10_max
value: 14.3481
- type: nauc_recall_at_10_std
value: -11.5027
- type: nauc_recall_at_10_diff1
value: 5.0225
- type: nauc_recall_at_20_max
value: 8.8023
- type: nauc_recall_at_20_std
value: -2.2973
- type: nauc_recall_at_20_diff1
value: 0.2097
- type: nauc_recall_at_100_max
value: 23.613799999999998
- type: nauc_recall_at_100_std
value: 5.728599999999999
- type: nauc_recall_at_100_diff1
value: -3.4857
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 59.8106
- type: nauc_precision_at_1_std
value: 41.6091
- type: nauc_precision_at_1_diff1
value: 15.8988
- type: nauc_precision_at_3_max
value: 5.8776
- type: nauc_precision_at_3_std
value: -9.775
- type: nauc_precision_at_3_diff1
value: -21.8474
- type: nauc_precision_at_5_max
value: 7.184799999999999
- type: nauc_precision_at_5_std
value: -15.965399999999999
- type: nauc_precision_at_5_diff1
value: -21.5915
- type: nauc_precision_at_10_max
value: 14.3481
- type: nauc_precision_at_10_std
value: -11.5027
- type: nauc_precision_at_10_diff1
value: 5.0225
- type: nauc_precision_at_20_max
value: 8.8023
- type: nauc_precision_at_20_std
value: -2.2973
- type: nauc_precision_at_20_diff1
value: 0.2097
- type: nauc_precision_at_100_max
value: 23.613799999999998
- type: nauc_precision_at_100_std
value: 5.728599999999999
- type: nauc_precision_at_100_diff1
value: -3.4857
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 59.8106
- type: nauc_mrr_at_1_std
value: 41.6091
- type: nauc_mrr_at_1_diff1
value: 15.8988
- type: nauc_mrr_at_3_max
value: 23.4457
- type: nauc_mrr_at_3_std
value: 6.589200000000001
- type: nauc_mrr_at_3_diff1
value: -9.1205
- type: nauc_mrr_at_5_max
value: 23.0402
- type: nauc_mrr_at_5_std
value: 2.8784
- type: nauc_mrr_at_5_diff1
value: -10.0377
- type: nauc_mrr_at_10_max
value: 23.477
- type: nauc_mrr_at_10_std
value: 1.9317999999999997
- type: nauc_mrr_at_10_diff1
value: -3.1433000000000004
- type: nauc_mrr_at_20_max
value: 21.138199999999998
- type: nauc_mrr_at_20_std
value: 3.3765000000000005
- type: nauc_mrr_at_20_diff1
value: -3.2526
- type: nauc_mrr_at_100_max
value: 21.8857
- type: nauc_mrr_at_100_std
value: 4.147
- type: nauc_mrr_at_100_diff1
value: -3.5649
- type: nauc_mrr_at_1000_max
value: 21.8479
- type: nauc_mrr_at_1000_std
value: 4.0359
- type: nauc_mrr_at_1000_diff1
value: -3.5894000000000004
- type: main_score
value: 5.173
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.538
- type: ndcg_at_3
value: 2.3619999999999997
- type: ndcg_at_5
value: 3.496
- type: ndcg_at_10
value: 4.166
- type: ndcg_at_20
value: 5.763
- type: ndcg_at_100
value: 16.819
- type: ndcg_at_1000
value: 20.063
- type: map_at_1
value: 0.538
- type: map_at_3
value: 1.882
- type: map_at_5
value: 2.527
- type: map_at_10
value: 2.79
- type: map_at_20
value: 3.2079999999999997
- type: map_at_100
value: 4.555
- type: map_at_1000
value: 4.7379999999999995
- type: recall_at_1
value: 0.538
- type: recall_at_3
value: 3.763
- type: recall_at_5
value: 6.451999999999999
- type: recall_at_10
value: 8.602
- type: recall_at_20
value: 15.054
- type: recall_at_100
value: 77.41900000000001
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.538
- type: precision_at_3
value: 1.254
- type: precision_at_5
value: 1.29
- type: precision_at_10
value: 0.86
- type: precision_at_20
value: 0.753
- type: precision_at_100
value: 0.774
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.5376
- type: mrr_at_3
value: 1.8817
- type: mrr_at_5
value: 2.5269
- type: mrr_at_10
value: 2.7897000000000003
- type: mrr_at_20
value: 3.2081999999999997
- type: mrr_at_100
value: 4.554600000000001
- type: mrr_at_1000
value: 4.7382
- type: nauc_ndcg_at_1_max
value: 100.0
- type: nauc_ndcg_at_1_std
value: 66.7257
- type: nauc_ndcg_at_1_diff1
value: 100.0
- type: nauc_ndcg_at_3_max
value: 29.630000000000003
- type: nauc_ndcg_at_3_std
value: 57.101400000000005
- type: nauc_ndcg_at_3_diff1
value: 22.5155
- type: nauc_ndcg_at_5_max
value: 8.1457
- type: nauc_ndcg_at_5_std
value: 43.9017
- type: nauc_ndcg_at_5_diff1
value: 12.2764
- type: nauc_ndcg_at_10_max
value: 10.8742
- type: nauc_ndcg_at_10_std
value: 35.634100000000004
- type: nauc_ndcg_at_10_diff1
value: 16.8804
- type: nauc_ndcg_at_20_max
value: 8.2366
- type: nauc_ndcg_at_20_std
value: 34.4244
- type: nauc_ndcg_at_20_diff1
value: 10.3725
- type: nauc_ndcg_at_100_max
value: 7.661900000000001
- type: nauc_ndcg_at_100_std
value: 24.1541
- type: nauc_ndcg_at_100_diff1
value: 8.6735
- type: nauc_ndcg_at_1000_max
value: 9.024899999999999
- type: nauc_ndcg_at_1000_std
value: 31.385099999999998
- type: nauc_ndcg_at_1000_diff1
value: 11.6807
- type: nauc_map_at_1_max
value: 100.0
- type: nauc_map_at_1_std
value: 66.7257
- type: nauc_map_at_1_diff1
value: 100.0
- type: nauc_map_at_3_max
value: 37.627500000000005
- type: nauc_map_at_3_std
value: 59.4071
- type: nauc_map_at_3_diff1
value: 27.9837
- type: nauc_map_at_5_max
value: 18.7887
- type: nauc_map_at_5_std
value: 48.7344
- type: nauc_map_at_5_diff1
value: 18.7448
- type: nauc_map_at_10_max
value: 19.7517
- type: nauc_map_at_10_std
value: 43.2046
- type: nauc_map_at_10_diff1
value: 21.3488
- type: nauc_map_at_20_max
value: 17.3749
- type: nauc_map_at_20_std
value: 41.8178
- type: nauc_map_at_20_diff1
value: 17.8946
- type: nauc_map_at_100_max
value: 15.4
- type: nauc_map_at_100_std
value: 37.7516
- type: nauc_map_at_100_diff1
value: 16.4172
- type: nauc_map_at_1000_max
value: 15.743099999999998
- type: nauc_map_at_1000_std
value: 38.642700000000005
- type: nauc_map_at_1000_diff1
value: 16.8576
- type: nauc_recall_at_1_max
value: 100.0
- type: nauc_recall_at_1_std
value: 66.7257
- type: nauc_recall_at_1_diff1
value: 100.0
- type: nauc_recall_at_3_max
value: 17.4401
- type: nauc_recall_at_3_std
value: 53.4353
- type: nauc_recall_at_3_diff1
value: 14.5988
- type: nauc_recall_at_5_max
value: -5.2527
- type: nauc_recall_at_5_std
value: 37.5174
- type: nauc_recall_at_5_diff1
value: 4.3982
- type: nauc_recall_at_10_max
value: 1.6920000000000002
- type: nauc_recall_at_10_std
value: 26.655299999999997
- type: nauc_recall_at_10_diff1
value: 12.6153
- type: nauc_recall_at_20_max
value: 1.2351
- type: nauc_recall_at_20_std
value: 28.0528
- type: nauc_recall_at_20_diff1
value: 3.728
- type: nauc_recall_at_100_max
value: 4.7833
- type: nauc_recall_at_100_std
value: 8.0403
- type: nauc_recall_at_100_diff1
value: 2.0422
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 100.0
- type: nauc_precision_at_1_std
value: 66.7257
- type: nauc_precision_at_1_diff1
value: 100.0
- type: nauc_precision_at_3_max
value: 17.4401
- type: nauc_precision_at_3_std
value: 53.4353
- type: nauc_precision_at_3_diff1
value: 14.5988
- type: nauc_precision_at_5_max
value: -5.2527
- type: nauc_precision_at_5_std
value: 37.5174
- type: nauc_precision_at_5_diff1
value: 4.3982
- type: nauc_precision_at_10_max
value: 1.6920000000000002
- type: nauc_precision_at_10_std
value: 26.655299999999997
- type: nauc_precision_at_10_diff1
value: 12.6153
- type: nauc_precision_at_20_max
value: 1.2351
- type: nauc_precision_at_20_std
value: 28.0528
- type: nauc_precision_at_20_diff1
value: 3.728
- type: nauc_precision_at_100_max
value: 4.7833
- type: nauc_precision_at_100_std
value: 8.0403
- type: nauc_precision_at_100_diff1
value: 2.0422
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 100.0
- type: nauc_mrr_at_1_std
value: 66.7257
- type: nauc_mrr_at_1_diff1
value: 100.0
- type: nauc_mrr_at_3_max
value: 37.627500000000005
- type: nauc_mrr_at_3_std
value: 59.4071
- type: nauc_mrr_at_3_diff1
value: 27.9837
- type: nauc_mrr_at_5_max
value: 18.7887
- type: nauc_mrr_at_5_std
value: 48.7344
- type: nauc_mrr_at_5_diff1
value: 18.7448
- type: nauc_mrr_at_10_max
value: 19.7517
- type: nauc_mrr_at_10_std
value: 43.2046
- type: nauc_mrr_at_10_diff1
value: 21.3488
- type: nauc_mrr_at_20_max
value: 17.3749
- type: nauc_mrr_at_20_std
value: 41.8178
- type: nauc_mrr_at_20_diff1
value: 17.8946
- type: nauc_mrr_at_100_max
value: 15.4
- type: nauc_mrr_at_100_std
value: 37.7516
- type: nauc_mrr_at_100_diff1
value: 16.4172
- type: nauc_mrr_at_1000_max
value: 15.743099999999998
- type: nauc_mrr_at_1000_std
value: 38.642700000000005
- type: nauc_mrr_at_1000_diff1
value: 16.8576
- type: main_score
value: 4.166
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.0
- type: ndcg_at_3
value: 0.694
- type: ndcg_at_5
value: 1.222
- type: ndcg_at_10
value: 2.809
- type: ndcg_at_20
value: 5.146
- type: ndcg_at_100
value: 14.91
- type: ndcg_at_1000
value: 18.864
- type: map_at_1
value: 0.0
- type: map_at_3
value: 0.511
- type: map_at_5
value: 0.818
- type: map_at_10
value: 1.47
- type: map_at_20
value: 2.12
- type: map_at_100
value: 3.2649999999999997
- type: map_at_1000
value: 3.485
- type: recall_at_1
value: 0.0
- type: recall_at_3
value: 1.2269999999999999
- type: recall_at_5
value: 2.4539999999999997
- type: recall_at_10
value: 7.362
- type: recall_at_20
value: 16.564
- type: recall_at_100
value: 72.393
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.0
- type: precision_at_3
value: 0.409
- type: precision_at_5
value: 0.49100000000000005
- type: precision_at_10
value: 0.736
- type: precision_at_20
value: 0.828
- type: precision_at_100
value: 0.724
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.0
- type: mrr_at_3
value: 0.5112
- type: mrr_at_5
value: 0.818
- type: mrr_at_10
value: 1.4704
- type: mrr_at_20
value: 2.12
- type: mrr_at_100
value: 3.2646
- type: mrr_at_1000
value: 3.4854999999999996
- type: nauc_ndcg_at_1_max
value: .nan
- type: nauc_ndcg_at_1_std
value: .nan
- type: nauc_ndcg_at_1_diff1
value: .nan
- type: nauc_ndcg_at_3_max
value: -7.0496
- type: nauc_ndcg_at_3_std
value: -32.1514
- type: nauc_ndcg_at_3_diff1
value: -18.6811
- type: nauc_ndcg_at_5_max
value: 13.1797
- type: nauc_ndcg_at_5_std
value: -24.1903
- type: nauc_ndcg_at_5_diff1
value: -29.849500000000003
- type: nauc_ndcg_at_10_max
value: 27.9005
- type: nauc_ndcg_at_10_std
value: -17.3769
- type: nauc_ndcg_at_10_diff1
value: -12.732299999999999
- type: nauc_ndcg_at_20_max
value: 21.567700000000002
- type: nauc_ndcg_at_20_std
value: -4.7954
- type: nauc_ndcg_at_20_diff1
value: -11.060599999999999
- type: nauc_ndcg_at_100_max
value: 11.6238
- type: nauc_ndcg_at_100_std
value: -5.933999999999999
- type: nauc_ndcg_at_100_diff1
value: -2.0311
- type: nauc_ndcg_at_1000_max
value: 17.6537
- type: nauc_ndcg_at_1000_std
value: -8.9981
- type: nauc_ndcg_at_1000_diff1
value: -5.7923
- type: nauc_map_at_1_max
value: .nan
- type: nauc_map_at_1_std
value: .nan
- type: nauc_map_at_1_diff1
value: .nan
- type: nauc_map_at_3_max
value: -8.3328
- type: nauc_map_at_3_std
value: -33.029399999999995
- type: nauc_map_at_3_diff1
value: -20.842299999999998
- type: nauc_map_at_5_max
value: 9.694600000000001
- type: nauc_map_at_5_std
value: -25.795
- type: nauc_map_at_5_diff1
value: -29.718899999999998
- type: nauc_map_at_10_max
value: 24.2406
- type: nauc_map_at_10_std
value: -19.192899999999998
- type: nauc_map_at_10_diff1
value: -16.1405
- type: nauc_map_at_20_max
value: 20.515800000000002
- type: nauc_map_at_20_std
value: -10.6617
- type: nauc_map_at_20_diff1
value: -14.4404
- type: nauc_map_at_100_max
value: 17.603099999999998
- type: nauc_map_at_100_std
value: -11.405
- type: nauc_map_at_100_diff1
value: -9.4802
- type: nauc_map_at_1000_max
value: 18.4729
- type: nauc_map_at_1000_std
value: -11.7628
- type: nauc_map_at_1000_diff1
value: -10.1215
- type: nauc_recall_at_1_max
value: .nan
- type: nauc_recall_at_1_std
value: .nan
- type: nauc_recall_at_1_diff1
value: .nan
- type: nauc_recall_at_3_max
value: -5.286
- type: nauc_recall_at_3_std
value: -30.9445
- type: nauc_recall_at_3_diff1
value: -15.7106
- type: nauc_recall_at_5_max
value: 17.227
- type: nauc_recall_at_5_std
value: -22.3411
- type: nauc_recall_at_5_diff1
value: -30.111900000000002
- type: nauc_recall_at_10_max
value: 30.406
- type: nauc_recall_at_10_std
value: -16.0824
- type: nauc_recall_at_10_diff1
value: -9.9285
- type: nauc_recall_at_20_max
value: 21.794900000000002
- type: nauc_recall_at_20_std
value: -0.7081
- type: nauc_recall_at_20_diff1
value: -8.8937
- type: nauc_recall_at_100_max
value: 3.2778
- type: nauc_recall_at_100_std
value: -0.6836
- type: nauc_recall_at_100_diff1
value: 3.6675
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: .nan
- type: nauc_precision_at_1_std
value: .nan
- type: nauc_precision_at_1_diff1
value: .nan
- type: nauc_precision_at_3_max
value: -5.286
- type: nauc_precision_at_3_std
value: -30.9445
- type: nauc_precision_at_3_diff1
value: -15.7106
- type: nauc_precision_at_5_max
value: 17.227
- type: nauc_precision_at_5_std
value: -22.3411
- type: nauc_precision_at_5_diff1
value: -30.111900000000002
- type: nauc_precision_at_10_max
value: 30.406
- type: nauc_precision_at_10_std
value: -16.0824
- type: nauc_precision_at_10_diff1
value: -9.9285
- type: nauc_precision_at_20_max
value: 21.794900000000002
- type: nauc_precision_at_20_std
value: -0.7081
- type: nauc_precision_at_20_diff1
value: -8.8937
- type: nauc_precision_at_100_max
value: 3.2778
- type: nauc_precision_at_100_std
value: -0.6836
- type: nauc_precision_at_100_diff1
value: 3.6675
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: .nan
- type: nauc_mrr_at_1_std
value: .nan
- type: nauc_mrr_at_1_diff1
value: .nan
- type: nauc_mrr_at_3_max
value: -8.3328
- type: nauc_mrr_at_3_std
value: -33.029399999999995
- type: nauc_mrr_at_3_diff1
value: -20.842299999999998
- type: nauc_mrr_at_5_max
value: 9.694600000000001
- type: nauc_mrr_at_5_std
value: -25.795
- type: nauc_mrr_at_5_diff1
value: -29.718899999999998
- type: nauc_mrr_at_10_max
value: 24.2406
- type: nauc_mrr_at_10_std
value: -19.192899999999998
- type: nauc_mrr_at_10_diff1
value: -16.1405
- type: nauc_mrr_at_20_max
value: 20.515800000000002
- type: nauc_mrr_at_20_std
value: -10.6617
- type: nauc_mrr_at_20_diff1
value: -14.4404
- type: nauc_mrr_at_100_max
value: 17.603099999999998
- type: nauc_mrr_at_100_std
value: -11.405
- type: nauc_mrr_at_100_diff1
value: -9.4802
- type: nauc_mrr_at_1000_max
value: 18.4729
- type: nauc_mrr_at_1000_std
value: -11.7628
- type: nauc_mrr_at_1000_diff1
value: -10.1215
- type: main_score
value: 2.809
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 1.5959999999999999
- type: ndcg_at_3
value: 2.869
- type: ndcg_at_5
value: 3.3029999999999995
- type: ndcg_at_10
value: 5.124
- type: ndcg_at_20
value: 6.805
- type: ndcg_at_100
value: 14.495
- type: ndcg_at_1000
value: 19.941
- type: map_at_1
value: 1.5959999999999999
- type: map_at_3
value: 2.571
- type: map_at_5
value: 2.81
- type: map_at_10
value: 3.5220000000000002
- type: map_at_20
value: 3.948
- type: map_at_100
value: 4.8309999999999995
- type: map_at_1000
value: 5.128
- type: recall_at_1
value: 1.5959999999999999
- type: recall_at_3
value: 3.723
- type: recall_at_5
value: 4.787
- type: recall_at_10
value: 10.638
- type: recall_at_20
value: 17.553
- type: recall_at_100
value: 61.702
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 1.5959999999999999
- type: precision_at_3
value: 1.2409999999999999
- type: precision_at_5
value: 0.9570000000000001
- type: precision_at_10
value: 1.064
- type: precision_at_20
value: 0.878
- type: precision_at_100
value: 0.617
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 1.5957
- type: mrr_at_3
value: 2.5709
- type: mrr_at_5
value: 2.8103
- type: mrr_at_10
value: 3.5216
- type: mrr_at_20
value: 3.9482999999999997
- type: mrr_at_100
value: 4.8312
- type: mrr_at_1000
value: 5.1277
- type: nauc_ndcg_at_1_max
value: 25.9707
- type: nauc_ndcg_at_1_std
value: 25.9707
- type: nauc_ndcg_at_1_diff1
value: 88.7908
- type: nauc_ndcg_at_3_max
value: 8.0769
- type: nauc_ndcg_at_3_std
value: -1.4973999999999998
- type: nauc_ndcg_at_3_diff1
value: 66.1072
- type: nauc_ndcg_at_5_max
value: 8.4885
- type: nauc_ndcg_at_5_std
value: 1.5889
- type: nauc_ndcg_at_5_diff1
value: 55.131699999999995
- type: nauc_ndcg_at_10_max
value: 4.4135
- type: nauc_ndcg_at_10_std
value: -2.4915
- type: nauc_ndcg_at_10_diff1
value: 40.2008
- type: nauc_ndcg_at_20_max
value: 5.2495
- type: nauc_ndcg_at_20_std
value: -6.4857
- type: nauc_ndcg_at_20_diff1
value: 30.0024
- type: nauc_ndcg_at_100_max
value: 15.6634
- type: nauc_ndcg_at_100_std
value: -2.1768
- type: nauc_ndcg_at_100_diff1
value: 25.4728
- type: nauc_ndcg_at_1000_max
value: 10.8195
- type: nauc_ndcg_at_1000_std
value: -0.9631000000000001
- type: nauc_ndcg_at_1000_diff1
value: 37.1256
- type: nauc_map_at_1_max
value: 25.9707
- type: nauc_map_at_1_std
value: 25.9707
- type: nauc_map_at_1_diff1
value: 88.7908
- type: nauc_map_at_3_max
value: 11.2388
- type: nauc_map_at_3_std
value: 2.7731
- type: nauc_map_at_3_diff1
value: 70.1588
- type: nauc_map_at_5_max
value: 11.5213
- type: nauc_map_at_5_std
value: 4.4621
- type: nauc_map_at_5_diff1
value: 62.586
- type: nauc_map_at_10_max
value: 8.664900000000001
- type: nauc_map_at_10_std
value: 0.9982
- type: nauc_map_at_10_diff1
value: 52.0845
- type: nauc_map_at_20_max
value: 8.7285
- type: nauc_map_at_20_std
value: -0.9410999999999999
- type: nauc_map_at_20_diff1
value: 46.8936
- type: nauc_map_at_100_max
value: 11.1619
- type: nauc_map_at_100_std
value: 0.5134
- type: nauc_map_at_100_diff1
value: 45.5704
- type: nauc_map_at_1000_max
value: 10.7283
- type: nauc_map_at_1000_std
value: 0.6891
- type: nauc_map_at_1000_diff1
value: 47.0302
- type: nauc_recall_at_1_max
value: 25.9707
- type: nauc_recall_at_1_std
value: 25.9707
- type: nauc_recall_at_1_diff1
value: 88.7908
- type: nauc_recall_at_3_max
value: 1.6386999999999998
- type: nauc_recall_at_3_std
value: -10.052
- type: nauc_recall_at_3_diff1
value: 57.8468
- type: nauc_recall_at_5_max
value: 3.0700000000000003
- type: nauc_recall_at_5_std
value: -3.0769
- type: nauc_recall_at_5_diff1
value: 41.4621
- type: nauc_recall_at_10_max
value: -0.44349999999999995
- type: nauc_recall_at_10_std
value: -5.8379
- type: nauc_recall_at_10_diff1
value: 26.6638
- type: nauc_recall_at_20_max
value: 2.3823
- type: nauc_recall_at_20_std
value: -11.5308
- type: nauc_recall_at_20_diff1
value: 13.6577
- type: nauc_recall_at_100_max
value: 24.204600000000003
- type: nauc_recall_at_100_std
value: -4.2306
- type: nauc_recall_at_100_diff1
value: 5.4663
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 25.9707
- type: nauc_precision_at_1_std
value: 25.9707
- type: nauc_precision_at_1_diff1
value: 88.7908
- type: nauc_precision_at_3_max
value: 1.6386999999999998
- type: nauc_precision_at_3_std
value: -10.052
- type: nauc_precision_at_3_diff1
value: 57.8468
- type: nauc_precision_at_5_max
value: 3.0700000000000003
- type: nauc_precision_at_5_std
value: -3.0769
- type: nauc_precision_at_5_diff1
value: 41.4621
- type: nauc_precision_at_10_max
value: -0.44349999999999995
- type: nauc_precision_at_10_std
value: -5.8379
- type: nauc_precision_at_10_diff1
value: 26.6638
- type: nauc_precision_at_20_max
value: 2.3823
- type: nauc_precision_at_20_std
value: -11.5308
- type: nauc_precision_at_20_diff1
value: 13.6577
- type: nauc_precision_at_100_max
value: 24.204600000000003
- type: nauc_precision_at_100_std
value: -4.2306
- type: nauc_precision_at_100_diff1
value: 5.4663
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 25.9707
- type: nauc_mrr_at_1_std
value: 25.9707
- type: nauc_mrr_at_1_diff1
value: 88.7908
- type: nauc_mrr_at_3_max
value: 11.2388
- type: nauc_mrr_at_3_std
value: 2.7731
- type: nauc_mrr_at_3_diff1
value: 70.1588
- type: nauc_mrr_at_5_max
value: 11.5213
- type: nauc_mrr_at_5_std
value: 4.4621
- type: nauc_mrr_at_5_diff1
value: 62.586
- type: nauc_mrr_at_10_max
value: 8.664900000000001
- type: nauc_mrr_at_10_std
value: 0.9982
- type: nauc_mrr_at_10_diff1
value: 52.0845
- type: nauc_mrr_at_20_max
value: 8.7285
- type: nauc_mrr_at_20_std
value: -0.9410999999999999
- type: nauc_mrr_at_20_diff1
value: 46.8936
- type: nauc_mrr_at_100_max
value: 11.1619
- type: nauc_mrr_at_100_std
value: 0.5134
- type: nauc_mrr_at_100_diff1
value: 45.5704
- type: nauc_mrr_at_1000_max
value: 10.7283
- type: nauc_mrr_at_1000_std
value: 0.6891
- type: nauc_mrr_at_1000_diff1
value: 47.0302
- type: main_score
value: 5.124
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 13.145000000000001
- type: ndcg_at_3
value: 17.358
- type: ndcg_at_5
value: 18.838
- type: ndcg_at_10
value: 20.508000000000003
- type: ndcg_at_20
value: 22.025
- type: ndcg_at_100
value: 24.966
- type: ndcg_at_1000
value: 28.415000000000003
- type: map_at_1
value: 13.135
- type: map_at_3
value: 16.292
- type: map_at_5
value: 17.105999999999998
- type: map_at_10
value: 17.793
- type: map_at_20
value: 18.207
- type: map_at_100
value: 18.590999999999998
- type: map_at_1000
value: 18.698999999999998
- type: recall_at_1
value: 13.135
- type: recall_at_3
value: 20.448
- type: recall_at_5
value: 24.067
- type: recall_at_10
value: 29.242
- type: recall_at_20
value: 35.262
- type: recall_at_100
value: 51.453
- type: recall_at_1000
value: 79.87100000000001
- type: precision_at_1
value: 13.145000000000001
- type: precision_at_3
value: 6.819
- type: precision_at_5
value: 4.8149999999999995
- type: precision_at_10
value: 2.9250000000000003
- type: precision_at_20
value: 1.764
- type: precision_at_100
value: 0.515
- type: precision_at_1000
value: 0.08
- type: mrr_at_1
value: 13.1446
- type: mrr_at_3
value: 16.301
- type: mrr_at_5
value: 17.1158
- type: mrr_at_10
value: 17.802699999999998
- type: mrr_at_20
value: 18.2164
- type: mrr_at_100
value: 18.5997
- type: mrr_at_1000
value: 18.708
- type: nauc_ndcg_at_1_max
value: 54.626
- type: nauc_ndcg_at_1_std
value: 9.7213
- type: nauc_ndcg_at_1_diff1
value: 48.3128
- type: nauc_ndcg_at_3_max
value: 49.8152
- type: nauc_ndcg_at_3_std
value: 10.6486
- type: nauc_ndcg_at_3_diff1
value: 37.6318
- type: nauc_ndcg_at_5_max
value: 49.3946
- type: nauc_ndcg_at_5_std
value: 11.0498
- type: nauc_ndcg_at_5_diff1
value: 36.6375
- type: nauc_ndcg_at_10_max
value: 48.226
- type: nauc_ndcg_at_10_std
value: 11.574900000000001
- type: nauc_ndcg_at_10_diff1
value: 34.591499999999996
- type: nauc_ndcg_at_20_max
value: 47.5075
- type: nauc_ndcg_at_20_std
value: 11.9084
- type: nauc_ndcg_at_20_diff1
value: 33.475300000000004
- type: nauc_ndcg_at_100_max
value: 47.131299999999996
- type: nauc_ndcg_at_100_std
value: 12.7452
- type: nauc_ndcg_at_100_diff1
value: 32.7759
- type: nauc_ndcg_at_1000_max
value: 47.5947
- type: nauc_ndcg_at_1000_std
value: 12.570500000000001
- type: nauc_ndcg_at_1000_diff1
value: 33.3662
- type: nauc_map_at_1_max
value: 54.5764
- type: nauc_map_at_1_std
value: 9.6486
- type: nauc_map_at_1_diff1
value: 48.2862
- type: nauc_map_at_3_max
value: 50.8942
- type: nauc_map_at_3_std
value: 10.4293
- type: nauc_map_at_3_diff1
value: 39.9007
- type: nauc_map_at_5_max
value: 50.61639999999999
- type: nauc_map_at_5_std
value: 10.6779
- type: nauc_map_at_5_diff1
value: 39.2573
- type: nauc_map_at_10_max
value: 50.0815
- type: nauc_map_at_10_std
value: 10.935400000000001
- type: nauc_map_at_10_diff1
value: 38.290400000000005
- type: nauc_map_at_20_max
value: 49.8737
- type: nauc_map_at_20_std
value: 11.0391
- type: nauc_map_at_20_diff1
value: 37.9496
- type: nauc_map_at_100_max
value: 49.7948
- type: nauc_map_at_100_std
value: 11.1509
- type: nauc_map_at_100_diff1
value: 37.8322
- type: nauc_map_at_1000_max
value: 49.818
- type: nauc_map_at_1000_std
value: 11.157300000000001
- type: nauc_map_at_1000_diff1
value: 37.859500000000004
- type: nauc_recall_at_1_max
value: 54.5764
- type: nauc_recall_at_1_std
value: 9.6486
- type: nauc_recall_at_1_diff1
value: 48.2862
- type: nauc_recall_at_3_max
value: 47.1152
- type: nauc_recall_at_3_std
value: 11.1346
- type: nauc_recall_at_3_diff1
value: 32.0666
- type: nauc_recall_at_5_max
value: 46.455600000000004
- type: nauc_recall_at_5_std
value: 11.905100000000001
- type: nauc_recall_at_5_diff1
value: 30.426599999999997
- type: nauc_recall_at_10_max
value: 43.7652
- type: nauc_recall_at_10_std
value: 13.0735
- type: nauc_recall_at_10_diff1
value: 25.9008
- type: nauc_recall_at_20_max
value: 41.6091
- type: nauc_recall_at_20_std
value: 14.041200000000002
- type: nauc_recall_at_20_diff1
value: 22.7051
- type: nauc_recall_at_100_max
value: 40.0424
- type: nauc_recall_at_100_std
value: 17.8576
- type: nauc_recall_at_100_diff1
value: 19.5013
- type: nauc_recall_at_1000_max
value: 39.2051
- type: nauc_recall_at_1000_std
value: 18.9662
- type: nauc_recall_at_1000_diff1
value: 15.2009
- type: nauc_precision_at_1_max
value: 54.626
- type: nauc_precision_at_1_std
value: 9.7213
- type: nauc_precision_at_1_diff1
value: 48.3128
- type: nauc_precision_at_3_max
value: 47.1626
- type: nauc_precision_at_3_std
value: 11.1885
- type: nauc_precision_at_3_diff1
value: 32.0978
- type: nauc_precision_at_5_max
value: 46.5
- type: nauc_precision_at_5_std
value: 11.955300000000001
- type: nauc_precision_at_5_diff1
value: 30.456
- type: nauc_precision_at_10_max
value: 43.8063
- type: nauc_precision_at_10_std
value: 13.1193
- type: nauc_precision_at_10_diff1
value: 25.9284
- type: nauc_precision_at_20_max
value: 41.6532
- type: nauc_precision_at_20_std
value: 14.0865
- type: nauc_precision_at_20_diff1
value: 22.7346
- type: nauc_precision_at_100_max
value: 40.0991
- type: nauc_precision_at_100_std
value: 17.935200000000002
- type: nauc_precision_at_100_diff1
value: 19.545399999999997
- type: nauc_precision_at_1000_max
value: 39.2887
- type: nauc_precision_at_1000_std
value: 19.0859
- type: nauc_precision_at_1000_diff1
value: 15.277
- type: nauc_mrr_at_1_max
value: 54.626
- type: nauc_mrr_at_1_std
value: 9.7213
- type: nauc_mrr_at_1_diff1
value: 48.3128
- type: nauc_mrr_at_3_max
value: 50.938300000000005
- type: nauc_mrr_at_3_std
value: 10.491100000000001
- type: nauc_mrr_at_3_diff1
value: 39.927099999999996
- type: nauc_mrr_at_5_max
value: 50.6598
- type: nauc_mrr_at_5_std
value: 10.7385
- type: nauc_mrr_at_5_diff1
value: 39.2835
- type: nauc_mrr_at_10_max
value: 50.124500000000005
- type: nauc_mrr_at_10_std
value: 10.994900000000001
- type: nauc_mrr_at_10_diff1
value: 38.3166
- type: nauc_mrr_at_20_max
value: 49.9166
- type: nauc_mrr_at_20_std
value: 11.0984
- type: nauc_mrr_at_20_diff1
value: 37.9759
- type: nauc_mrr_at_100_max
value: 49.836200000000005
- type: nauc_mrr_at_100_std
value: 11.2082
- type: nauc_mrr_at_100_diff1
value: 37.8577
- type: nauc_mrr_at_1000_max
value: 49.859500000000004
- type: nauc_mrr_at_1000_std
value: 11.2147
- type: nauc_mrr_at_1000_diff1
value: 37.885000000000005
- type: main_score
value: 20.508000000000003
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.182
- type: ndcg_at_3
value: 0.358
- type: ndcg_at_5
value: 0.457
- type: ndcg_at_10
value: 0.732
- type: ndcg_at_20
value: 1.065
- type: ndcg_at_100
value: 2.373
- type: ndcg_at_1000
value: 9.254
- type: map_at_1
value: 0.182
- type: map_at_3
value: 0.314
- type: map_at_5
value: 0.368
- type: map_at_10
value: 0.482
- type: map_at_20
value: 0.5720000000000001
- type: map_at_100
value: 0.7250000000000001
- type: map_at_1000
value: 0.889
- type: recall_at_1
value: 0.182
- type: recall_at_3
value: 0.485
- type: recall_at_5
value: 0.728
- type: recall_at_10
value: 1.5779999999999998
- type: recall_at_20
value: 2.913
- type: recall_at_100
value: 10.376000000000001
- type: recall_at_1000
value: 70.419
- type: precision_at_1
value: 0.182
- type: precision_at_3
value: 0.16199999999999998
- type: precision_at_5
value: 0.146
- type: precision_at_10
value: 0.158
- type: precision_at_20
value: 0.146
- type: precision_at_100
value: 0.104
- type: precision_at_1000
value: 0.06999999999999999
- type: mrr_at_1
value: 0.182
- type: mrr_at_3
value: 0.3135
- type: mrr_at_5
value: 0.3681
- type: mrr_at_10
value: 0.4821
- type: mrr_at_20
value: 0.5716
- type: mrr_at_100
value: 0.7255
- type: mrr_at_1000
value: 0.8887
- type: nauc_ndcg_at_1_max
value: 28.624699999999997
- type: nauc_ndcg_at_1_std
value: 6.1873
- type: nauc_ndcg_at_1_diff1
value: 53.0501
- type: nauc_ndcg_at_3_max
value: 3.8078000000000003
- type: nauc_ndcg_at_3_std
value: 2.7539000000000002
- type: nauc_ndcg_at_3_diff1
value: 22.1103
- type: nauc_ndcg_at_5_max
value: 0.6967
- type: nauc_ndcg_at_5_std
value: 1.5486
- type: nauc_ndcg_at_5_diff1
value: 11.990499999999999
- type: nauc_ndcg_at_10_max
value: 0.2519
- type: nauc_ndcg_at_10_std
value: -1.0728
- type: nauc_ndcg_at_10_diff1
value: 0.755
- type: nauc_ndcg_at_20_max
value: -1.6757000000000002
- type: nauc_ndcg_at_20_std
value: -0.3161
- type: nauc_ndcg_at_20_diff1
value: 4.1878
- type: nauc_ndcg_at_100_max
value: -2.2508
- type: nauc_ndcg_at_100_std
value: -5.1434
- type: nauc_ndcg_at_100_diff1
value: -0.15410000000000001
- type: nauc_ndcg_at_1000_max
value: -5.904
- type: nauc_ndcg_at_1000_std
value: -5.141
- type: nauc_ndcg_at_1000_diff1
value: -4.047
- type: nauc_map_at_1_max
value: 28.624699999999997
- type: nauc_map_at_1_std
value: 6.1873
- type: nauc_map_at_1_diff1
value: 53.0501
- type: nauc_map_at_3_max
value: 7.9022
- type: nauc_map_at_3_std
value: 3.8733999999999997
- type: nauc_map_at_3_diff1
value: 27.1528
- type: nauc_map_at_5_max
value: 5.4552000000000005
- type: nauc_map_at_5_std
value: 2.6903
- type: nauc_map_at_5_diff1
value: 19.6651
- type: nauc_map_at_10_max
value: 3.7626
- type: nauc_map_at_10_std
value: 0.9359
- type: nauc_map_at_10_diff1
value: 10.467799999999999
- type: nauc_map_at_20_max
value: 2.3636
- type: nauc_map_at_20_std
value: 1.0025
- type: nauc_map_at_20_diff1
value: 10.8077
- type: nauc_map_at_100_max
value: 0.5793999999999999
- type: nauc_map_at_100_std
value: -1.1226999999999998
- type: nauc_map_at_100_diff1
value: 7.180400000000001
- type: nauc_map_at_1000_max
value: -0.1581
- type: nauc_map_at_1000_std
value: -1.7341
- type: nauc_map_at_1000_diff1
value: 6.1155
- type: nauc_recall_at_1_max
value: 28.624699999999997
- type: nauc_recall_at_1_std
value: 6.1873
- type: nauc_recall_at_1_diff1
value: 53.0501
- type: nauc_recall_at_3_max
value: -3.9881
- type: nauc_recall_at_3_std
value: 0.4971
- type: nauc_recall_at_3_diff1
value: 12.523000000000001
- type: nauc_recall_at_5_max
value: -6.7618
- type: nauc_recall_at_5_std
value: -0.19449999999999998
- type: nauc_recall_at_5_diff1
value: -0.1727
- type: nauc_recall_at_10_max
value: -2.9286
- type: nauc_recall_at_10_std
value: -3.2508000000000004
- type: nauc_recall_at_10_diff1
value: -9.1922
- type: nauc_recall_at_20_max
value: -4.4579
- type: nauc_recall_at_20_std
value: -1.1248
- type: nauc_recall_at_20_diff1
value: 0.1875
- type: nauc_recall_at_100_max
value: -2.4858000000000002
- type: nauc_recall_at_100_std
value: -6.912999999999999
- type: nauc_recall_at_100_diff1
value: -2.0854
- type: nauc_recall_at_1000_max
value: -8.0511
- type: nauc_recall_at_1000_std
value: -5.1655
- type: nauc_recall_at_1000_diff1
value: -7.4412
- type: nauc_precision_at_1_max
value: 28.624699999999997
- type: nauc_precision_at_1_std
value: 6.1873
- type: nauc_precision_at_1_diff1
value: 53.0501
- type: nauc_precision_at_3_max
value: -3.9881
- type: nauc_precision_at_3_std
value: 0.4971
- type: nauc_precision_at_3_diff1
value: 12.523000000000001
- type: nauc_precision_at_5_max
value: -6.7618
- type: nauc_precision_at_5_std
value: -0.19449999999999998
- type: nauc_precision_at_5_diff1
value: -0.1727
- type: nauc_precision_at_10_max
value: -2.9286
- type: nauc_precision_at_10_std
value: -3.2508000000000004
- type: nauc_precision_at_10_diff1
value: -9.1922
- type: nauc_precision_at_20_max
value: -4.4579
- type: nauc_precision_at_20_std
value: -1.1248
- type: nauc_precision_at_20_diff1
value: 0.1875
- type: nauc_precision_at_100_max
value: -2.4858000000000002
- type: nauc_precision_at_100_std
value: -6.912999999999999
- type: nauc_precision_at_100_diff1
value: -2.0854
- type: nauc_precision_at_1000_max
value: -8.1766
- type: nauc_precision_at_1000_std
value: -5.273
- type: nauc_precision_at_1000_diff1
value: -7.5506
- type: nauc_mrr_at_1_max
value: 28.624699999999997
- type: nauc_mrr_at_1_std
value: 6.1873
- type: nauc_mrr_at_1_diff1
value: 53.0501
- type: nauc_mrr_at_3_max
value: 7.9022
- type: nauc_mrr_at_3_std
value: 3.8733999999999997
- type: nauc_mrr_at_3_diff1
value: 27.1528
- type: nauc_mrr_at_5_max
value: 5.4552000000000005
- type: nauc_mrr_at_5_std
value: 2.6903
- type: nauc_mrr_at_5_diff1
value: 19.6651
- type: nauc_mrr_at_10_max
value: 3.7626
- type: nauc_mrr_at_10_std
value: 0.9359
- type: nauc_mrr_at_10_diff1
value: 10.467799999999999
- type: nauc_mrr_at_20_max
value: 2.3636
- type: nauc_mrr_at_20_std
value: 1.0025
- type: nauc_mrr_at_20_diff1
value: 10.8077
- type: nauc_mrr_at_100_max
value: 0.5793999999999999
- type: nauc_mrr_at_100_std
value: -1.1226999999999998
- type: nauc_mrr_at_100_diff1
value: 7.180400000000001
- type: nauc_mrr_at_1000_max
value: -0.1628
- type: nauc_mrr_at_1000_std
value: -1.7382000000000002
- type: nauc_mrr_at_1000_diff1
value: 6.1114
- type: main_score
value: 0.732
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.038
- type: ndcg_at_3
value: 0.13899999999999998
- type: ndcg_at_5
value: 0.23700000000000002
- type: ndcg_at_10
value: 0.31
- type: ndcg_at_20
value: 0.439
- type: ndcg_at_100
value: 1.061
- type: ndcg_at_1000
value: 3.857
- type: map_at_1
value: 0.038
- type: map_at_3
value: 0.109
- type: map_at_5
value: 0.163
- type: map_at_10
value: 0.193
- type: map_at_20
value: 0.22899999999999998
- type: map_at_100
value: 0.306
- type: map_at_1000
value: 0.373
- type: recall_at_1
value: 0.038
- type: recall_at_3
value: 0.22499999999999998
- type: recall_at_5
value: 0.469
- type: recall_at_10
value: 0.694
- type: recall_at_20
value: 1.2
- type: recall_at_100
value: 4.689
- type: recall_at_1000
value: 29.060000000000002
- type: precision_at_1
value: 0.038
- type: precision_at_3
value: 0.075
- type: precision_at_5
value: 0.094
- type: precision_at_10
value: 0.06899999999999999
- type: precision_at_20
value: 0.06
- type: precision_at_100
value: 0.047
- type: precision_at_1000
value: 0.029
- type: mrr_at_1
value: 0.0375
- type: mrr_at_3
value: 0.1094
- type: mrr_at_5
value: 0.1629
- type: mrr_at_10
value: 0.19319999999999998
- type: mrr_at_20
value: 0.2287
- type: mrr_at_100
value: 0.3061
- type: mrr_at_1000
value: 0.373
- type: nauc_ndcg_at_1_max
value: 25.0247
- type: nauc_ndcg_at_1_std
value: 100.0
- type: nauc_ndcg_at_1_diff1
value: 21.269099999999998
- type: nauc_ndcg_at_3_max
value: -2.6221
- type: nauc_ndcg_at_3_std
value: 58.781499999999994
- type: nauc_ndcg_at_3_diff1
value: -8.5801
- type: nauc_ndcg_at_5_max
value: 11.3108
- type: nauc_ndcg_at_5_std
value: 52.609300000000005
- type: nauc_ndcg_at_5_diff1
value: -1.0551
- type: nauc_ndcg_at_10_max
value: 16.031000000000002
- type: nauc_ndcg_at_10_std
value: 45.3023
- type: nauc_ndcg_at_10_diff1
value: 5.7653
- type: nauc_ndcg_at_20_max
value: 9.3925
- type: nauc_ndcg_at_20_std
value: 30.537799999999997
- type: nauc_ndcg_at_20_diff1
value: 0.9148999999999999
- type: nauc_ndcg_at_100_max
value: 2.9912
- type: nauc_ndcg_at_100_std
value: 18.066499999999998
- type: nauc_ndcg_at_100_diff1
value: -4.87
- type: nauc_ndcg_at_1000_max
value: 3.5232
- type: nauc_ndcg_at_1000_std
value: 9.6114
- type: nauc_ndcg_at_1000_diff1
value: -2.5008
- type: nauc_map_at_1_max
value: 25.0247
- type: nauc_map_at_1_std
value: 100.0
- type: nauc_map_at_1_diff1
value: 21.269099999999998
- type: nauc_map_at_3_max
value: -0.7981
- type: nauc_map_at_3_std
value: 64.2546
- type: nauc_map_at_3_diff1
value: -6.6277
- type: nauc_map_at_5_max
value: 9.6297
- type: nauc_map_at_5_std
value: 57.415000000000006
- type: nauc_map_at_5_diff1
value: -1.5141
- type: nauc_map_at_10_max
value: 12.7673
- type: nauc_map_at_10_std
value: 51.8795
- type: nauc_map_at_10_diff1
value: 3.0726
- type: nauc_map_at_20_max
value: 9.911399999999999
- type: nauc_map_at_20_std
value: 43.0182
- type: nauc_map_at_20_diff1
value: 1.046
- type: nauc_map_at_100_max
value: 6.8581
- type: nauc_map_at_100_std
value: 35.2906
- type: nauc_map_at_100_diff1
value: -1.5436999999999999
- type: nauc_map_at_1000_max
value: 6.7394
- type: nauc_map_at_1000_std
value: 31.183
- type: nauc_map_at_1000_diff1
value: -1.4350999999999998
- type: nauc_recall_at_1_max
value: 25.0247
- type: nauc_recall_at_1_std
value: 100.0
- type: nauc_recall_at_1_diff1
value: 21.269099999999998
- type: nauc_recall_at_3_max
value: -5.088
- type: nauc_recall_at_3_std
value: 50.689099999999996
- type: nauc_recall_at_3_diff1
value: -11.2155
- type: nauc_recall_at_5_max
value: 13.6279
- type: nauc_recall_at_5_std
value: 47.4024
- type: nauc_recall_at_5_diff1
value: -0.1403
- type: nauc_recall_at_10_max
value: 19.7762
- type: nauc_recall_at_10_std
value: 38.9053
- type: nauc_recall_at_10_diff1
value: 9.001199999999999
- type: nauc_recall_at_20_max
value: 8.4134
- type: nauc_recall_at_20_std
value: 20.3737
- type: nauc_recall_at_20_diff1
value: 0.4812
- type: nauc_recall_at_100_max
value: 1.1665999999999999
- type: nauc_recall_at_100_std
value: 11.3664
- type: nauc_recall_at_100_diff1
value: -6.5212
- type: nauc_recall_at_1000_max
value: 2.8707
- type: nauc_recall_at_1000_std
value: 5.8485000000000005
- type: nauc_recall_at_1000_diff1
value: -2.4025000000000003
- type: nauc_precision_at_1_max
value: 25.0247
- type: nauc_precision_at_1_std
value: 100.0
- type: nauc_precision_at_1_diff1
value: 21.269099999999998
- type: nauc_precision_at_3_max
value: -5.088
- type: nauc_precision_at_3_std
value: 50.689099999999996
- type: nauc_precision_at_3_diff1
value: -11.2155
- type: nauc_precision_at_5_max
value: 13.6279
- type: nauc_precision_at_5_std
value: 47.4024
- type: nauc_precision_at_5_diff1
value: -0.1403
- type: nauc_precision_at_10_max
value: 19.7762
- type: nauc_precision_at_10_std
value: 38.9053
- type: nauc_precision_at_10_diff1
value: 9.001199999999999
- type: nauc_precision_at_20_max
value: 8.4134
- type: nauc_precision_at_20_std
value: 20.3737
- type: nauc_precision_at_20_diff1
value: 0.4812
- type: nauc_precision_at_100_max
value: 1.1665999999999999
- type: nauc_precision_at_100_std
value: 11.3664
- type: nauc_precision_at_100_diff1
value: -6.5212
- type: nauc_precision_at_1000_max
value: 2.8549
- type: nauc_precision_at_1000_std
value: 5.8442
- type: nauc_precision_at_1000_diff1
value: -2.3865999999999996
- type: nauc_mrr_at_1_max
value: 25.0247
- type: nauc_mrr_at_1_std
value: 100.0
- type: nauc_mrr_at_1_diff1
value: 21.269099999999998
- type: nauc_mrr_at_3_max
value: -0.7981
- type: nauc_mrr_at_3_std
value: 64.2546
- type: nauc_mrr_at_3_diff1
value: -6.6277
- type: nauc_mrr_at_5_max
value: 9.6297
- type: nauc_mrr_at_5_std
value: 57.415000000000006
- type: nauc_mrr_at_5_diff1
value: -1.5141
- type: nauc_mrr_at_10_max
value: 12.7673
- type: nauc_mrr_at_10_std
value: 51.8795
- type: nauc_mrr_at_10_diff1
value: 3.0726
- type: nauc_mrr_at_20_max
value: 9.911399999999999
- type: nauc_mrr_at_20_std
value: 43.0182
- type: nauc_mrr_at_20_diff1
value: 1.046
- type: nauc_mrr_at_100_max
value: 6.8581
- type: nauc_mrr_at_100_std
value: 35.2906
- type: nauc_mrr_at_100_diff1
value: -1.5436999999999999
- type: nauc_mrr_at_1000_max
value: 6.7368999999999994
- type: nauc_mrr_at_1000_std
value: 31.181199999999997
- type: nauc_mrr_at_1000_diff1
value: -1.4328
- type: main_score
value: 0.31
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.051000000000000004
- type: ndcg_at_3
value: 0.19
- type: ndcg_at_5
value: 0.22899999999999998
- type: ndcg_at_10
value: 0.43
- type: ndcg_at_20
value: 0.668
- type: ndcg_at_100
value: 1.687
- type: ndcg_at_1000
value: 7.878
- type: map_at_1
value: 0.051000000000000004
- type: map_at_3
value: 0.152
- type: map_at_5
value: 0.172
- type: map_at_10
value: 0.258
- type: map_at_20
value: 0.32
- type: map_at_100
value: 0.44400000000000006
- type: map_at_1000
value: 0.592
- type: recall_at_1
value: 0.051000000000000004
- type: recall_at_3
value: 0.303
- type: recall_at_5
value: 0.404
- type: recall_at_10
value: 1.011
- type: recall_at_20
value: 1.9720000000000002
- type: recall_at_100
value: 7.735
- type: recall_at_1000
value: 61.729
- type: precision_at_1
value: 0.051000000000000004
- type: precision_at_3
value: 0.101
- type: precision_at_5
value: 0.08099999999999999
- type: precision_at_10
value: 0.101
- type: precision_at_20
value: 0.099
- type: precision_at_100
value: 0.077
- type: precision_at_1000
value: 0.062
- type: mrr_at_1
value: 0.050600000000000006
- type: mrr_at_3
value: 0.1517
- type: mrr_at_5
value: 0.1719
- type: mrr_at_10
value: 0.2578
- type: mrr_at_20
value: 0.3199
- type: mrr_at_100
value: 0.44409999999999994
- type: mrr_at_1000
value: 0.5918
- type: nauc_ndcg_at_1_max
value: 66.2097
- type: nauc_ndcg_at_1_std
value: 66.2097
- type: nauc_ndcg_at_1_diff1
value: 32.419399999999996
- type: nauc_ndcg_at_3_max
value: -3.5048000000000004
- type: nauc_ndcg_at_3_std
value: -1.1603
- type: nauc_ndcg_at_3_diff1
value: 4.6897
- type: nauc_ndcg_at_5_max
value: -9.5677
- type: nauc_ndcg_at_5_std
value: 7.449999999999999
- type: nauc_ndcg_at_5_diff1
value: -5.919300000000001
- type: nauc_ndcg_at_10_max
value: -4.8053
- type: nauc_ndcg_at_10_std
value: 13.3414
- type: nauc_ndcg_at_10_diff1
value: -5.1068
- type: nauc_ndcg_at_20_max
value: -2.2846
- type: nauc_ndcg_at_20_std
value: 7.589700000000001
- type: nauc_ndcg_at_20_diff1
value: -2.1516
- type: nauc_ndcg_at_100_max
value: 1.1325999999999998
- type: nauc_ndcg_at_100_std
value: 3.0970999999999997
- type: nauc_ndcg_at_100_diff1
value: 1.9342000000000001
- type: nauc_ndcg_at_1000_max
value: 0.7024
- type: nauc_ndcg_at_1000_std
value: 4.9341
- type: nauc_ndcg_at_1000_diff1
value: 2.2851
- type: nauc_map_at_1_max
value: 66.2097
- type: nauc_map_at_1_std
value: 66.2097
- type: nauc_map_at_1_diff1
value: 32.419399999999996
- type: nauc_map_at_3_max
value: 1.5827
- type: nauc_map_at_3_std
value: 3.7415
- type: nauc_map_at_3_diff1
value: 6.6845
- type: nauc_map_at_5_max
value: -3.1972
- type: nauc_map_at_5_std
value: 9.103
- type: nauc_map_at_5_diff1
value: -0.8668
- type: nauc_map_at_10_max
value: -2.1843000000000004
- type: nauc_map_at_10_std
value: 12.824399999999999
- type: nauc_map_at_10_diff1
value: -2.0369
- type: nauc_map_at_20_max
value: -1.4794
- type: nauc_map_at_20_std
value: 9.4729
- type: nauc_map_at_20_diff1
value: -0.8819
- type: nauc_map_at_100_max
value: -0.0817
- type: nauc_map_at_100_std
value: 7.3338
- type: nauc_map_at_100_diff1
value: 1.1033
- type: nauc_map_at_1000_max
value: -0.4769
- type: nauc_map_at_1000_std
value: 6.927
- type: nauc_map_at_1000_diff1
value: 0.9951
- type: nauc_recall_at_1_max
value: 66.2097
- type: nauc_recall_at_1_std
value: 66.2097
- type: nauc_recall_at_1_diff1
value: 32.419399999999996
- type: nauc_recall_at_3_max
value: -10.7387
- type: nauc_recall_at_3_std
value: -8.126999999999999
- type: nauc_recall_at_3_diff1
value: 1.8596000000000001
- type: nauc_recall_at_5_max
value: -17.8157
- type: nauc_recall_at_5_std
value: 6.2334
- type: nauc_recall_at_5_diff1
value: -12.9807
- type: nauc_recall_at_10_max
value: -6.397899999999999
- type: nauc_recall_at_10_std
value: 14.4229
- type: nauc_recall_at_10_diff1
value: -7.5951
- type: nauc_recall_at_20_max
value: -1.9718
- type: nauc_recall_at_20_std
value: 6.3748
- type: nauc_recall_at_20_diff1
value: -2.4903999999999997
- type: nauc_recall_at_100_max
value: 1.9014
- type: nauc_recall_at_100_std
value: 1.3683
- type: nauc_recall_at_100_diff1
value: 2.3786
- type: nauc_recall_at_1000_max
value: 1.6191
- type: nauc_recall_at_1000_std
value: 5.3927000000000005
- type: nauc_recall_at_1000_diff1
value: 3.0677
- type: nauc_precision_at_1_max
value: 66.2097
- type: nauc_precision_at_1_std
value: 66.2097
- type: nauc_precision_at_1_diff1
value: 32.419399999999996
- type: nauc_precision_at_3_max
value: -10.7387
- type: nauc_precision_at_3_std
value: -8.126999999999999
- type: nauc_precision_at_3_diff1
value: 1.8596000000000001
- type: nauc_precision_at_5_max
value: -17.8157
- type: nauc_precision_at_5_std
value: 6.2334
- type: nauc_precision_at_5_diff1
value: -12.9807
- type: nauc_precision_at_10_max
value: -6.397899999999999
- type: nauc_precision_at_10_std
value: 14.4229
- type: nauc_precision_at_10_diff1
value: -7.5951
- type: nauc_precision_at_20_max
value: -1.9718
- type: nauc_precision_at_20_std
value: 6.3748
- type: nauc_precision_at_20_diff1
value: -2.4903999999999997
- type: nauc_precision_at_100_max
value: 1.9014
- type: nauc_precision_at_100_std
value: 1.3683
- type: nauc_precision_at_100_diff1
value: 2.3786
- type: nauc_precision_at_1000_max
value: 1.6191
- type: nauc_precision_at_1000_std
value: 5.3927000000000005
- type: nauc_precision_at_1000_diff1
value: 3.0677
- type: nauc_mrr_at_1_max
value: 66.2097
- type: nauc_mrr_at_1_std
value: 66.2097
- type: nauc_mrr_at_1_diff1
value: 32.419399999999996
- type: nauc_mrr_at_3_max
value: 1.5827
- type: nauc_mrr_at_3_std
value: 3.7415
- type: nauc_mrr_at_3_diff1
value: 6.6845
- type: nauc_mrr_at_5_max
value: -3.1972
- type: nauc_mrr_at_5_std
value: 9.103
- type: nauc_mrr_at_5_diff1
value: -0.8668
- type: nauc_mrr_at_10_max
value: -2.1843000000000004
- type: nauc_mrr_at_10_std
value: 12.824399999999999
- type: nauc_mrr_at_10_diff1
value: -2.0369
- type: nauc_mrr_at_20_max
value: -1.4794
- type: nauc_mrr_at_20_std
value: 9.4729
- type: nauc_mrr_at_20_diff1
value: -0.8819
- type: nauc_mrr_at_100_max
value: -0.0817
- type: nauc_mrr_at_100_std
value: 7.3338
- type: nauc_mrr_at_100_diff1
value: 1.1033
- type: nauc_mrr_at_1000_max
value: -0.4769
- type: nauc_mrr_at_1000_std
value: 6.927
- type: nauc_mrr_at_1000_diff1
value: 0.9951
- type: main_score
value: 0.43
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.218
- type: ndcg_at_3
value: 0.322
- type: ndcg_at_5
value: 0.38999999999999996
- type: ndcg_at_10
value: 0.7230000000000001
- type: ndcg_at_20
value: 1.004
- type: ndcg_at_100
value: 2.493
- type: ndcg_at_1000
value: 9.104
- type: map_at_1
value: 0.218
- type: map_at_3
value: 0.3
- type: map_at_5
value: 0.33899999999999997
- type: map_at_10
value: 0.475
- type: map_at_20
value: 0.547
- type: map_at_100
value: 0.7250000000000001
- type: map_at_1000
value: 0.8829999999999999
- type: recall_at_1
value: 0.218
- type: recall_at_3
value: 0.382
- type: recall_at_5
value: 0.5459999999999999
- type: recall_at_10
value: 1.584
- type: recall_at_20
value: 2.731
- type: recall_at_100
value: 11.196
- type: recall_at_1000
value: 68.815
- type: precision_at_1
value: 0.218
- type: precision_at_3
value: 0.127
- type: precision_at_5
value: 0.109
- type: precision_at_10
value: 0.158
- type: precision_at_20
value: 0.13699999999999998
- type: precision_at_100
value: 0.11199999999999999
- type: precision_at_1000
value: 0.06899999999999999
- type: mrr_at_1
value: 0.2185
- type: mrr_at_3
value: 0.3004
- type: mrr_at_5
value: 0.3386
- type: mrr_at_10
value: 0.4749
- type: mrr_at_20
value: 0.547
- type: mrr_at_100
value: 0.7244999999999999
- type: mrr_at_1000
value: 0.8832
- type: nauc_ndcg_at_1_max
value: 12.828800000000001
- type: nauc_ndcg_at_1_std
value: 12.828800000000001
- type: nauc_ndcg_at_1_diff1
value: 11.947199999999999
- type: nauc_ndcg_at_3_max
value: 12.5981
- type: nauc_ndcg_at_3_std
value: 21.1562
- type: nauc_ndcg_at_3_diff1
value: 9.2582
- type: nauc_ndcg_at_5_max
value: 14.901800000000001
- type: nauc_ndcg_at_5_std
value: 18.6988
- type: nauc_ndcg_at_5_diff1
value: 14.119000000000002
- type: nauc_ndcg_at_10_max
value: -0.8004000000000001
- type: nauc_ndcg_at_10_std
value: 7.9477
- type: nauc_ndcg_at_10_diff1
value: 2.8608000000000002
- type: nauc_ndcg_at_20_max
value: 0.4824
- type: nauc_ndcg_at_20_std
value: 11.9344
- type: nauc_ndcg_at_20_diff1
value: -4.9617
- type: nauc_ndcg_at_100_max
value: 3.257
- type: nauc_ndcg_at_100_std
value: 3.4608
- type: nauc_ndcg_at_100_diff1
value: 5.3857
- type: nauc_ndcg_at_1000_max
value: -2.4372000000000003
- type: nauc_ndcg_at_1000_std
value: -1.0752
- type: nauc_ndcg_at_1000_diff1
value: 2.1543
- type: nauc_map_at_1_max
value: 12.828800000000001
- type: nauc_map_at_1_std
value: 12.828800000000001
- type: nauc_map_at_1_diff1
value: 11.947199999999999
- type: nauc_map_at_3_max
value: 12.6329
- type: nauc_map_at_3_std
value: 19.8994
- type: nauc_map_at_3_diff1
value: 9.664
- type: nauc_map_at_5_max
value: 14.0908
- type: nauc_map_at_5_std
value: 18.2199
- type: nauc_map_at_5_diff1
value: 12.865699999999999
- type: nauc_map_at_10_max
value: 4.3515999999999995
- type: nauc_map_at_10_std
value: 11.3301
- type: nauc_map_at_10_diff1
value: 6.399000000000001
- type: nauc_map_at_20_max
value: 3.9482999999999997
- type: nauc_map_at_20_std
value: 12.4301
- type: nauc_map_at_20_diff1
value: 2.2731000000000003
- type: nauc_map_at_100_max
value: 4.5962000000000005
- type: nauc_map_at_100_std
value: 8.9138
- type: nauc_map_at_100_diff1
value: 4.7346
- type: nauc_map_at_1000_max
value: 3.7624999999999997
- type: nauc_map_at_1000_std
value: 7.8308
- type: nauc_map_at_1000_diff1
value: 4.3517
- type: nauc_recall_at_1_max
value: 12.828800000000001
- type: nauc_recall_at_1_std
value: 12.828800000000001
- type: nauc_recall_at_1_diff1
value: 11.947199999999999
- type: nauc_recall_at_3_max
value: 12.520999999999999
- type: nauc_recall_at_3_std
value: 23.9397
- type: nauc_recall_at_3_diff1
value: 8.3594
- type: nauc_recall_at_5_max
value: 16.5653
- type: nauc_recall_at_5_std
value: 19.4884
- type: nauc_recall_at_5_diff1
value: 16.6947
- type: nauc_recall_at_10_max
value: -6.5468
- type: nauc_recall_at_10_std
value: 4.1849
- type: nauc_recall_at_10_diff1
value: -1.2863
- type: nauc_recall_at_20_max
value: -1.7106
- type: nauc_recall_at_20_std
value: 12.2516
- type: nauc_recall_at_20_diff1
value: -11.3388
- type: nauc_recall_at_100_max
value: 3.1510000000000002
- type: nauc_recall_at_100_std
value: 1.1705
- type: nauc_recall_at_100_diff1
value: 6.681900000000001
- type: nauc_recall_at_1000_max
value: -6.5283999999999995
- type: nauc_recall_at_1000_std
value: -5.6811
- type: nauc_recall_at_1000_diff1
value: 0.9051999999999999
- type: nauc_precision_at_1_max
value: 12.828800000000001
- type: nauc_precision_at_1_std
value: 12.828800000000001
- type: nauc_precision_at_1_diff1
value: 11.947199999999999
- type: nauc_precision_at_3_max
value: 12.520999999999999
- type: nauc_precision_at_3_std
value: 23.9397
- type: nauc_precision_at_3_diff1
value: 8.3594
- type: nauc_precision_at_5_max
value: 16.5653
- type: nauc_precision_at_5_std
value: 19.4884
- type: nauc_precision_at_5_diff1
value: 16.6947
- type: nauc_precision_at_10_max
value: -6.5468
- type: nauc_precision_at_10_std
value: 4.1849
- type: nauc_precision_at_10_diff1
value: -1.2863
- type: nauc_precision_at_20_max
value: -1.7106
- type: nauc_precision_at_20_std
value: 12.2516
- type: nauc_precision_at_20_diff1
value: -11.3388
- type: nauc_precision_at_100_max
value: 3.1510000000000002
- type: nauc_precision_at_100_std
value: 1.1705
- type: nauc_precision_at_100_diff1
value: 6.681900000000001
- type: nauc_precision_at_1000_max
value: -6.5283999999999995
- type: nauc_precision_at_1000_std
value: -5.6811
- type: nauc_precision_at_1000_diff1
value: 0.9051999999999999
- type: nauc_mrr_at_1_max
value: 12.828800000000001
- type: nauc_mrr_at_1_std
value: 12.828800000000001
- type: nauc_mrr_at_1_diff1
value: 11.947199999999999
- type: nauc_mrr_at_3_max
value: 12.6329
- type: nauc_mrr_at_3_std
value: 19.8994
- type: nauc_mrr_at_3_diff1
value: 9.664
- type: nauc_mrr_at_5_max
value: 14.0908
- type: nauc_mrr_at_5_std
value: 18.2199
- type: nauc_mrr_at_5_diff1
value: 12.865699999999999
- type: nauc_mrr_at_10_max
value: 4.3515999999999995
- type: nauc_mrr_at_10_std
value: 11.3301
- type: nauc_mrr_at_10_diff1
value: 6.399000000000001
- type: nauc_mrr_at_20_max
value: 3.9482999999999997
- type: nauc_mrr_at_20_std
value: 12.4301
- type: nauc_mrr_at_20_diff1
value: 2.2731000000000003
- type: nauc_mrr_at_100_max
value: 4.5962000000000005
- type: nauc_mrr_at_100_std
value: 8.9138
- type: nauc_mrr_at_100_diff1
value: 4.7346
- type: nauc_mrr_at_1000_max
value: 3.7624999999999997
- type: nauc_mrr_at_1000_std
value: 7.8308
- type: nauc_mrr_at_1000_diff1
value: 4.3517
- type: main_score
value: 0.7230000000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.098
- type: ndcg_at_3
value: 0.22
- type: ndcg_at_5
value: 0.304
- type: ndcg_at_10
value: 0.46499999999999997
- type: ndcg_at_20
value: 0.673
- type: ndcg_at_100
value: 1.469
- type: ndcg_at_1000
value: 7.327999999999999
- type: map_at_1
value: 0.098
- type: map_at_3
value: 0.179
- type: map_at_5
value: 0.22799999999999998
- type: map_at_10
value: 0.296
- type: map_at_20
value: 0.35300000000000004
- type: map_at_100
value: 0.44799999999999995
- type: map_at_1000
value: 0.584
- type: recall_at_1
value: 0.098
- type: recall_at_3
value: 0.34199999999999997
- type: recall_at_5
value: 0.5369999999999999
- type: recall_at_10
value: 1.026
- type: recall_at_20
value: 1.856
- type: recall_at_100
value: 6.351
- type: recall_at_1000
value: 57.694
- type: precision_at_1
value: 0.098
- type: precision_at_3
value: 0.11399999999999999
- type: precision_at_5
value: 0.107
- type: precision_at_10
value: 0.10300000000000001
- type: precision_at_20
value: 0.093
- type: precision_at_100
value: 0.064
- type: precision_at_1000
value: 0.058
- type: mrr_at_1
value: 0.0977
- type: mrr_at_3
value: 0.1791
- type: mrr_at_5
value: 0.22799999999999998
- type: mrr_at_10
value: 0.29650000000000004
- type: mrr_at_20
value: 0.3525
- type: mrr_at_100
value: 0.4483
- type: mrr_at_1000
value: 0.5842
- type: nauc_ndcg_at_1_max
value: -39.0297
- type: nauc_ndcg_at_1_std
value: -45.7382
- type: nauc_ndcg_at_1_diff1
value: -8.7843
- type: nauc_ndcg_at_3_max
value: -24.9691
- type: nauc_ndcg_at_3_std
value: -11.2432
- type: nauc_ndcg_at_3_diff1
value: -27.354
- type: nauc_ndcg_at_5_max
value: -22.1604
- type: nauc_ndcg_at_5_std
value: -11.8447
- type: nauc_ndcg_at_5_diff1
value: -6.9122
- type: nauc_ndcg_at_10_max
value: -23.735
- type: nauc_ndcg_at_10_std
value: -15.4924
- type: nauc_ndcg_at_10_diff1
value: -10.152999999999999
- type: nauc_ndcg_at_20_max
value: -20.741699999999998
- type: nauc_ndcg_at_20_std
value: -13.452300000000001
- type: nauc_ndcg_at_20_diff1
value: -12.496599999999999
- type: nauc_ndcg_at_100_max
value: -10.9657
- type: nauc_ndcg_at_100_std
value: -8.015500000000001
- type: nauc_ndcg_at_100_diff1
value: -4.9342999999999995
- type: nauc_ndcg_at_1000_max
value: -7.3108
- type: nauc_ndcg_at_1000_std
value: -7.736800000000001
- type: nauc_ndcg_at_1000_diff1
value: -5.5809
- type: nauc_map_at_1_max
value: -39.0297
- type: nauc_map_at_1_std
value: -45.7382
- type: nauc_map_at_1_diff1
value: -8.7843
- type: nauc_map_at_3_max
value: -27.5256
- type: nauc_map_at_3_std
value: -17.515
- type: nauc_map_at_3_diff1
value: -23.9777
- type: nauc_map_at_5_max
value: -24.8037
- type: nauc_map_at_5_std
value: -16.636699999999998
- type: nauc_map_at_5_diff1
value: -8.8785
- type: nauc_map_at_10_max
value: -25.373800000000003
- type: nauc_map_at_10_std
value: -17.8539
- type: nauc_map_at_10_diff1
value: -11.072899999999999
- type: nauc_map_at_20_max
value: -24.0998
- type: nauc_map_at_20_std
value: -16.9043
- type: nauc_map_at_20_diff1
value: -12.5078
- type: nauc_map_at_100_max
value: -19.8743
- type: nauc_map_at_100_std
value: -14.344299999999999
- type: nauc_map_at_100_diff1
value: -9.7229
- type: nauc_map_at_1000_max
value: -17.7073
- type: nauc_map_at_1000_std
value: -13.0328
- type: nauc_map_at_1000_diff1
value: -9.25
- type: nauc_recall_at_1_max
value: -39.0297
- type: nauc_recall_at_1_std
value: -45.7382
- type: nauc_recall_at_1_diff1
value: -8.7843
- type: nauc_recall_at_3_max
value: -20.951800000000002
- type: nauc_recall_at_3_std
value: -1.3875
- type: nauc_recall_at_3_diff1
value: -32.6596
- type: nauc_recall_at_5_max
value: -18.723300000000002
- type: nauc_recall_at_5_std
value: -5.7615
- type: nauc_recall_at_5_diff1
value: -3.8796999999999997
- type: nauc_recall_at_10_max
value: -22.3454
- type: nauc_recall_at_10_std
value: -13.831199999999999
- type: nauc_recall_at_10_diff1
value: -9.0449
- type: nauc_recall_at_20_max
value: -17.8615
- type: nauc_recall_at_20_std
value: -10.921899999999999
- type: nauc_recall_at_20_diff1
value: -12.389100000000001
- type: nauc_recall_at_100_max
value: -6.7801
- type: nauc_recall_at_100_std
value: -5.249899999999999
- type: nauc_recall_at_100_diff1
value: -2.3929
- type: nauc_recall_at_1000_max
value: -5.3346
- type: nauc_recall_at_1000_std
value: -7.7999
- type: nauc_recall_at_1000_diff1
value: -5.005
- type: nauc_precision_at_1_max
value: -39.0297
- type: nauc_precision_at_1_std
value: -45.7382
- type: nauc_precision_at_1_diff1
value: -8.7843
- type: nauc_precision_at_3_max
value: -20.951800000000002
- type: nauc_precision_at_3_std
value: -1.3875
- type: nauc_precision_at_3_diff1
value: -32.6596
- type: nauc_precision_at_5_max
value: -18.723300000000002
- type: nauc_precision_at_5_std
value: -5.7615
- type: nauc_precision_at_5_diff1
value: -3.8796999999999997
- type: nauc_precision_at_10_max
value: -22.3454
- type: nauc_precision_at_10_std
value: -13.831199999999999
- type: nauc_precision_at_10_diff1
value: -9.0449
- type: nauc_precision_at_20_max
value: -17.8615
- type: nauc_precision_at_20_std
value: -10.921899999999999
- type: nauc_precision_at_20_diff1
value: -12.389100000000001
- type: nauc_precision_at_100_max
value: -6.7801
- type: nauc_precision_at_100_std
value: -5.249899999999999
- type: nauc_precision_at_100_diff1
value: -2.3929
- type: nauc_precision_at_1000_max
value: -5.3346
- type: nauc_precision_at_1000_std
value: -7.7999
- type: nauc_precision_at_1000_diff1
value: -5.005
- type: nauc_mrr_at_1_max
value: -39.0297
- type: nauc_mrr_at_1_std
value: -45.7382
- type: nauc_mrr_at_1_diff1
value: -8.7843
- type: nauc_mrr_at_3_max
value: -27.5256
- type: nauc_mrr_at_3_std
value: -17.515
- type: nauc_mrr_at_3_diff1
value: -23.9777
- type: nauc_mrr_at_5_max
value: -24.8037
- type: nauc_mrr_at_5_std
value: -16.636699999999998
- type: nauc_mrr_at_5_diff1
value: -8.8785
- type: nauc_mrr_at_10_max
value: -25.373800000000003
- type: nauc_mrr_at_10_std
value: -17.8539
- type: nauc_mrr_at_10_diff1
value: -11.072899999999999
- type: nauc_mrr_at_20_max
value: -24.0998
- type: nauc_mrr_at_20_std
value: -16.9043
- type: nauc_mrr_at_20_diff1
value: -12.5078
- type: nauc_mrr_at_100_max
value: -19.8743
- type: nauc_mrr_at_100_std
value: -14.344299999999999
- type: nauc_mrr_at_100_diff1
value: -9.7229
- type: nauc_mrr_at_1000_max
value: -17.7073
- type: nauc_mrr_at_1000_std
value: -13.0328
- type: nauc_mrr_at_1000_diff1
value: -9.25
- type: main_score
value: 0.46499999999999997
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.105
- type: ndcg_at_3
value: 0.197
- type: ndcg_at_5
value: 0.28200000000000003
- type: ndcg_at_10
value: 0.45799999999999996
- type: ndcg_at_20
value: 0.695
- type: ndcg_at_100
value: 1.595
- type: ndcg_at_1000
value: 7.693
- type: map_at_1
value: 0.105
- type: map_at_3
value: 0.174
- type: map_at_5
value: 0.22100000000000003
- type: map_at_10
value: 0.28800000000000003
- type: map_at_20
value: 0.35200000000000004
- type: map_at_100
value: 0.455
- type: map_at_1000
value: 0.5930000000000001
- type: recall_at_1
value: 0.105
- type: recall_at_3
value: 0.262
- type: recall_at_5
value: 0.471
- type: recall_at_10
value: 1.046
- type: recall_at_20
value: 1.9869999999999999
- type: recall_at_100
value: 7.165000000000001
- type: recall_at_1000
value: 60.826
- type: precision_at_1
value: 0.105
- type: precision_at_3
value: 0.087
- type: precision_at_5
value: 0.094
- type: precision_at_10
value: 0.105
- type: precision_at_20
value: 0.099
- type: precision_at_100
value: 0.07200000000000001
- type: precision_at_1000
value: 0.061
- type: mrr_at_1
value: 0.1046
- type: mrr_at_3
value: 0.1743
- type: mrr_at_5
value: 0.22139999999999999
- type: mrr_at_10
value: 0.28809999999999997
- type: mrr_at_20
value: 0.3525
- type: mrr_at_100
value: 0.45510000000000006
- type: mrr_at_1000
value: 0.5931
- type: nauc_ndcg_at_1_max
value: 54.9196
- type: nauc_ndcg_at_1_std
value: 29.255399999999998
- type: nauc_ndcg_at_1_diff1
value: 83.0875
- type: nauc_ndcg_at_3_max
value: 55.1068
- type: nauc_ndcg_at_3_std
value: 43.5827
- type: nauc_ndcg_at_3_diff1
value: 65.4072
- type: nauc_ndcg_at_5_max
value: 60.8846
- type: nauc_ndcg_at_5_std
value: 53.4801
- type: nauc_ndcg_at_5_diff1
value: 52.855700000000006
- type: nauc_ndcg_at_10_max
value: 42.187000000000005
- type: nauc_ndcg_at_10_std
value: 41.0796
- type: nauc_ndcg_at_10_diff1
value: 31.4853
- type: nauc_ndcg_at_20_max
value: 39.556599999999996
- type: nauc_ndcg_at_20_std
value: 39.8692
- type: nauc_ndcg_at_20_diff1
value: 28.9452
- type: nauc_ndcg_at_100_max
value: 20.7679
- type: nauc_ndcg_at_100_std
value: 23.0806
- type: nauc_ndcg_at_100_diff1
value: 15.4211
- type: nauc_ndcg_at_1000_max
value: 16.6114
- type: nauc_ndcg_at_1000_std
value: 16.4112
- type: nauc_ndcg_at_1000_diff1
value: 10.213700000000001
- type: nauc_map_at_1_max
value: 54.9196
- type: nauc_map_at_1_std
value: 29.255399999999998
- type: nauc_map_at_1_diff1
value: 83.0875
- type: nauc_map_at_3_max
value: 57.2075
- type: nauc_map_at_3_std
value: 43.4043
- type: nauc_map_at_3_diff1
value: 69.78529999999999
- type: nauc_map_at_5_max
value: 60.711999999999996
- type: nauc_map_at_5_std
value: 50.112
- type: nauc_map_at_5_diff1
value: 60.0604
- type: nauc_map_at_10_max
value: 49.7578
- type: nauc_map_at_10_std
value: 43.871300000000005
- type: nauc_map_at_10_diff1
value: 45.129599999999996
- type: nauc_map_at_20_max
value: 46.7772
- type: nauc_map_at_20_std
value: 43.0928
- type: nauc_map_at_20_diff1
value: 40.8293
- type: nauc_map_at_100_max
value: 37.595299999999995
- type: nauc_map_at_100_std
value: 35.288199999999996
- type: nauc_map_at_100_diff1
value: 32.1313
- type: nauc_map_at_1000_max
value: 34.822199999999995
- type: nauc_map_at_1000_std
value: 32.6604
- type: nauc_map_at_1000_diff1
value: 29.493599999999997
- type: nauc_recall_at_1_max
value: 54.9196
- type: nauc_recall_at_1_std
value: 29.255399999999998
- type: nauc_recall_at_1_diff1
value: 83.0875
- type: nauc_recall_at_3_max
value: 50.4794
- type: nauc_recall_at_3_std
value: 43.4043
- type: nauc_recall_at_3_diff1
value: 56.4831
- type: nauc_recall_at_5_max
value: 61.213499999999996
- type: nauc_recall_at_5_std
value: 58.540099999999995
- type: nauc_recall_at_5_diff1
value: 42.0099
- type: nauc_recall_at_10_max
value: 33.8003
- type: nauc_recall_at_10_std
value: 37.2919
- type: nauc_recall_at_10_diff1
value: 17.9128
- type: nauc_recall_at_20_max
value: 34.3856
- type: nauc_recall_at_20_std
value: 36.9134
- type: nauc_recall_at_20_diff1
value: 21.3988
- type: nauc_recall_at_100_max
value: 14.2024
- type: nauc_recall_at_100_std
value: 17.9803
- type: nauc_recall_at_100_diff1
value: 10.1473
- type: nauc_recall_at_1000_max
value: 12.4813
- type: nauc_recall_at_1000_std
value: 11.7174
- type: nauc_recall_at_1000_diff1
value: 5.5424
- type: nauc_precision_at_1_max
value: 54.9196
- type: nauc_precision_at_1_std
value: 29.255399999999998
- type: nauc_precision_at_1_diff1
value: 83.0875
- type: nauc_precision_at_3_max
value: 50.4794
- type: nauc_precision_at_3_std
value: 43.4043
- type: nauc_precision_at_3_diff1
value: 56.4831
- type: nauc_precision_at_5_max
value: 61.213499999999996
- type: nauc_precision_at_5_std
value: 58.540099999999995
- type: nauc_precision_at_5_diff1
value: 42.0099
- type: nauc_precision_at_10_max
value: 33.8003
- type: nauc_precision_at_10_std
value: 37.2919
- type: nauc_precision_at_10_diff1
value: 17.9128
- type: nauc_precision_at_20_max
value: 34.3856
- type: nauc_precision_at_20_std
value: 36.9134
- type: nauc_precision_at_20_diff1
value: 21.3988
- type: nauc_precision_at_100_max
value: 14.2024
- type: nauc_precision_at_100_std
value: 17.9803
- type: nauc_precision_at_100_diff1
value: 10.1473
- type: nauc_precision_at_1000_max
value: 12.4813
- type: nauc_precision_at_1000_std
value: 11.7174
- type: nauc_precision_at_1000_diff1
value: 5.5424
- type: nauc_mrr_at_1_max
value: 54.9196
- type: nauc_mrr_at_1_std
value: 29.255399999999998
- type: nauc_mrr_at_1_diff1
value: 83.0875
- type: nauc_mrr_at_3_max
value: 57.2075
- type: nauc_mrr_at_3_std
value: 43.4043
- type: nauc_mrr_at_3_diff1
value: 69.78529999999999
- type: nauc_mrr_at_5_max
value: 60.711999999999996
- type: nauc_mrr_at_5_std
value: 50.112
- type: nauc_mrr_at_5_diff1
value: 60.0604
- type: nauc_mrr_at_10_max
value: 49.7578
- type: nauc_mrr_at_10_std
value: 43.871300000000005
- type: nauc_mrr_at_10_diff1
value: 45.129599999999996
- type: nauc_mrr_at_20_max
value: 46.7772
- type: nauc_mrr_at_20_std
value: 43.0928
- type: nauc_mrr_at_20_diff1
value: 40.8293
- type: nauc_mrr_at_100_max
value: 37.595299999999995
- type: nauc_mrr_at_100_std
value: 35.288199999999996
- type: nauc_mrr_at_100_diff1
value: 32.1313
- type: nauc_mrr_at_1000_max
value: 34.822199999999995
- type: nauc_mrr_at_1000_std
value: 32.6604
- type: nauc_mrr_at_1000_diff1
value: 29.493599999999997
- type: main_score
value: 0.45799999999999996
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.243
- type: ndcg_at_3
value: 0.5329999999999999
- type: ndcg_at_5
value: 0.7080000000000001
- type: ndcg_at_10
value: 0.822
- type: ndcg_at_20
value: 1.149
- type: ndcg_at_100
value: 2.443
- type: ndcg_at_1000
value: 9.719999999999999
- type: map_at_1
value: 0.243
- type: map_at_3
value: 0.46499999999999997
- type: map_at_5
value: 0.562
- type: map_at_10
value: 0.607
- type: map_at_20
value: 0.692
- type: map_at_100
value: 0.84
- type: map_at_1000
value: 1.014
- type: recall_at_1
value: 0.243
- type: recall_at_3
value: 0.728
- type: recall_at_5
value: 1.1520000000000001
- type: recall_at_10
value: 1.516
- type: recall_at_20
value: 2.85
- type: recall_at_100
value: 10.309
- type: recall_at_1000
value: 73.681
- type: precision_at_1
value: 0.243
- type: precision_at_3
value: 0.243
- type: precision_at_5
value: 0.22999999999999998
- type: precision_at_10
value: 0.152
- type: precision_at_20
value: 0.14300000000000002
- type: precision_at_100
value: 0.10300000000000001
- type: precision_at_1000
value: 0.074
- type: mrr_at_1
value: 0.2426
- type: mrr_at_3
value: 0.46490000000000004
- type: mrr_at_5
value: 0.562
- type: mrr_at_10
value: 0.6072
- type: mrr_at_20
value: 0.6916
- type: mrr_at_100
value: 0.8397
- type: mrr_at_1000
value: 1.0143
- type: nauc_ndcg_at_1_max
value: 34.470800000000004
- type: nauc_ndcg_at_1_std
value: 17.7296
- type: nauc_ndcg_at_1_diff1
value: 25.4054
- type: nauc_ndcg_at_3_max
value: 51.27589999999999
- type: nauc_ndcg_at_3_std
value: 29.8213
- type: nauc_ndcg_at_3_diff1
value: 19.96
- type: nauc_ndcg_at_5_max
value: 58.739799999999995
- type: nauc_ndcg_at_5_std
value: 24.7685
- type: nauc_ndcg_at_5_diff1
value: 17.957
- type: nauc_ndcg_at_10_max
value: 54.85060000000001
- type: nauc_ndcg_at_10_std
value: 19.6216
- type: nauc_ndcg_at_10_diff1
value: 16.5672
- type: nauc_ndcg_at_20_max
value: 45.870400000000004
- type: nauc_ndcg_at_20_std
value: 14.829500000000001
- type: nauc_ndcg_at_20_diff1
value: 18.0996
- type: nauc_ndcg_at_100_max
value: 33.6706
- type: nauc_ndcg_at_100_std
value: 10.0954
- type: nauc_ndcg_at_100_diff1
value: 9.6092
- type: nauc_ndcg_at_1000_max
value: 25.971300000000003
- type: nauc_ndcg_at_1000_std
value: 4.9195
- type: nauc_ndcg_at_1000_diff1
value: 7.0839
- type: nauc_map_at_1_max
value: 34.470800000000004
- type: nauc_map_at_1_std
value: 17.7296
- type: nauc_map_at_1_diff1
value: 25.4054
- type: nauc_map_at_3_max
value: 49.3966
- type: nauc_map_at_3_std
value: 27.9153
- type: nauc_map_at_3_diff1
value: 20.7442
- type: nauc_map_at_5_max
value: 54.789500000000004
- type: nauc_map_at_5_std
value: 24.4111
- type: nauc_map_at_5_diff1
value: 18.7472
- type: nauc_map_at_10_max
value: 53.115
- type: nauc_map_at_10_std
value: 21.7997
- type: nauc_map_at_10_diff1
value: 18.1703
- type: nauc_map_at_20_max
value: 49.4189
- type: nauc_map_at_20_std
value: 19.4909
- type: nauc_map_at_20_diff1
value: 18.6365
- type: nauc_map_at_100_max
value: 45.3179
- type: nauc_map_at_100_std
value: 17.7435
- type: nauc_map_at_100_diff1
value: 16.0309
- type: nauc_map_at_1000_max
value: 43.352000000000004
- type: nauc_map_at_1000_std
value: 16.3267
- type: nauc_map_at_1000_diff1
value: 15.204300000000002
- type: nauc_recall_at_1_max
value: 34.470800000000004
- type: nauc_recall_at_1_std
value: 17.7296
- type: nauc_recall_at_1_diff1
value: 25.4054
- type: nauc_recall_at_3_max
value: 54.6788
- type: nauc_recall_at_3_std
value: 33.4369
- type: nauc_recall_at_3_diff1
value: 18.488
- type: nauc_recall_at_5_max
value: 64.8516
- type: nauc_recall_at_5_std
value: 25.182100000000002
- type: nauc_recall_at_5_diff1
value: 16.9772
- type: nauc_recall_at_10_max
value: 56.427099999999996
- type: nauc_recall_at_10_std
value: 15.958400000000001
- type: nauc_recall_at_10_diff1
value: 14.3287
- type: nauc_recall_at_20_max
value: 41.0315
- type: nauc_recall_at_20_std
value: 9.7701
- type: nauc_recall_at_20_diff1
value: 17.8564
- type: nauc_recall_at_100_max
value: 27.0754
- type: nauc_recall_at_100_std
value: 6.103
- type: nauc_recall_at_100_diff1
value: 5.9928
- type: nauc_recall_at_1000_max
value: 16.7685
- type: nauc_recall_at_1000_std
value: -0.752
- type: nauc_recall_at_1000_diff1
value: 3.0706
- type: nauc_precision_at_1_max
value: 34.470800000000004
- type: nauc_precision_at_1_std
value: 17.7296
- type: nauc_precision_at_1_diff1
value: 25.4054
- type: nauc_precision_at_3_max
value: 54.6788
- type: nauc_precision_at_3_std
value: 33.4369
- type: nauc_precision_at_3_diff1
value: 18.488
- type: nauc_precision_at_5_max
value: 64.8516
- type: nauc_precision_at_5_std
value: 25.182100000000002
- type: nauc_precision_at_5_diff1
value: 16.9772
- type: nauc_precision_at_10_max
value: 56.427099999999996
- type: nauc_precision_at_10_std
value: 15.958400000000001
- type: nauc_precision_at_10_diff1
value: 14.3287
- type: nauc_precision_at_20_max
value: 41.0315
- type: nauc_precision_at_20_std
value: 9.7701
- type: nauc_precision_at_20_diff1
value: 17.8564
- type: nauc_precision_at_100_max
value: 27.0754
- type: nauc_precision_at_100_std
value: 6.103
- type: nauc_precision_at_100_diff1
value: 5.9928
- type: nauc_precision_at_1000_max
value: 16.7685
- type: nauc_precision_at_1000_std
value: -0.752
- type: nauc_precision_at_1000_diff1
value: 3.0706
- type: nauc_mrr_at_1_max
value: 34.470800000000004
- type: nauc_mrr_at_1_std
value: 17.7296
- type: nauc_mrr_at_1_diff1
value: 25.4054
- type: nauc_mrr_at_3_max
value: 49.3966
- type: nauc_mrr_at_3_std
value: 27.9153
- type: nauc_mrr_at_3_diff1
value: 20.7442
- type: nauc_mrr_at_5_max
value: 54.789500000000004
- type: nauc_mrr_at_5_std
value: 24.4111
- type: nauc_mrr_at_5_diff1
value: 18.7472
- type: nauc_mrr_at_10_max
value: 53.115
- type: nauc_mrr_at_10_std
value: 21.7997
- type: nauc_mrr_at_10_diff1
value: 18.1703
- type: nauc_mrr_at_20_max
value: 49.4189
- type: nauc_mrr_at_20_std
value: 19.4909
- type: nauc_mrr_at_20_diff1
value: 18.6365
- type: nauc_mrr_at_100_max
value: 45.3179
- type: nauc_mrr_at_100_std
value: 17.7435
- type: nauc_mrr_at_100_diff1
value: 16.0309
- type: nauc_mrr_at_1000_max
value: 43.352000000000004
- type: nauc_mrr_at_1000_std
value: 16.3267
- type: nauc_mrr_at_1000_diff1
value: 15.204300000000002
- type: main_score
value: 0.822
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.375
- type: ndcg_at_3
value: 0.5780000000000001
- type: ndcg_at_5
value: 0.654
- type: ndcg_at_10
value: 0.8250000000000001
- type: ndcg_at_20
value: 1.034
- type: ndcg_at_100
value: 1.7930000000000001
- type: ndcg_at_1000
value: 4.977
- type: map_at_1
value: 0.375
- type: map_at_3
value: 0.525
- type: map_at_5
value: 0.567
- type: map_at_10
value: 0.638
- type: map_at_20
value: 0.696
- type: map_at_100
value: 0.788
- type: map_at_1000
value: 0.868
- type: recall_at_1
value: 0.375
- type: recall_at_3
value: 0.731
- type: recall_at_5
value: 0.919
- type: recall_at_10
value: 1.444
- type: recall_at_20
value: 2.2689999999999997
- type: recall_at_100
value: 6.563
- type: recall_at_1000
value: 34.099000000000004
- type: precision_at_1
value: 0.375
- type: precision_at_3
value: 0.244
- type: precision_at_5
value: 0.184
- type: precision_at_10
value: 0.14400000000000002
- type: precision_at_20
value: 0.11299999999999999
- type: precision_at_100
value: 0.066
- type: precision_at_1000
value: 0.034
- type: mrr_at_1
value: 0.375
- type: mrr_at_3
value: 0.525
- type: mrr_at_5
value: 0.5672
- type: mrr_at_10
value: 0.6383
- type: mrr_at_20
value: 0.6961
- type: mrr_at_100
value: 0.7882
- type: mrr_at_1000
value: 0.8677
- type: nauc_ndcg_at_1_max
value: 56.5121
- type: nauc_ndcg_at_1_std
value: 19.2292
- type: nauc_ndcg_at_1_diff1
value: 18.6031
- type: nauc_ndcg_at_3_max
value: 53.795899999999996
- type: nauc_ndcg_at_3_std
value: 13.674900000000001
- type: nauc_ndcg_at_3_diff1
value: 14.913699999999999
- type: nauc_ndcg_at_5_max
value: 54.0713
- type: nauc_ndcg_at_5_std
value: 16.5134
- type: nauc_ndcg_at_5_diff1
value: 13.835
- type: nauc_ndcg_at_10_max
value: 47.3624
- type: nauc_ndcg_at_10_std
value: 14.0322
- type: nauc_ndcg_at_10_diff1
value: 12.4765
- type: nauc_ndcg_at_20_max
value: 40.5382
- type: nauc_ndcg_at_20_std
value: 13.1801
- type: nauc_ndcg_at_20_diff1
value: 10.8866
- type: nauc_ndcg_at_100_max
value: 27.4861
- type: nauc_ndcg_at_100_std
value: 9.985
- type: nauc_ndcg_at_100_diff1
value: 5.003
- type: nauc_ndcg_at_1000_max
value: 14.236299999999998
- type: nauc_ndcg_at_1000_std
value: 5.5438
- type: nauc_ndcg_at_1000_diff1
value: 3.5621
- type: nauc_map_at_1_max
value: 56.5121
- type: nauc_map_at_1_std
value: 19.2292
- type: nauc_map_at_1_diff1
value: 18.6031
- type: nauc_map_at_3_max
value: 54.069599999999994
- type: nauc_map_at_3_std
value: 14.5317
- type: nauc_map_at_3_diff1
value: 15.2434
- type: nauc_map_at_5_max
value: 54.295
- type: nauc_map_at_5_std
value: 16.362
- type: nauc_map_at_5_diff1
value: 14.560200000000002
- type: nauc_map_at_10_max
value: 50.6652
- type: nauc_map_at_10_std
value: 14.840700000000002
- type: nauc_map_at_10_diff1
value: 13.7079
- type: nauc_map_at_20_max
value: 47.6818
- type: nauc_map_at_20_std
value: 14.355599999999999
- type: nauc_map_at_20_diff1
value: 12.894400000000001
- type: nauc_map_at_100_max
value: 43.4343
- type: nauc_map_at_100_std
value: 13.241
- type: nauc_map_at_100_diff1
value: 11.0841
- type: nauc_map_at_1000_max
value: 40.872
- type: nauc_map_at_1000_std
value: 12.5729
- type: nauc_map_at_1000_diff1
value: 10.5395
- type: nauc_recall_at_1_max
value: 56.5121
- type: nauc_recall_at_1_std
value: 19.2292
- type: nauc_recall_at_1_diff1
value: 18.6031
- type: nauc_recall_at_3_max
value: 53.2864
- type: nauc_recall_at_3_std
value: 11.929499999999999
- type: nauc_recall_at_3_diff1
value: 14.321200000000001
- type: nauc_recall_at_5_max
value: 53.689
- type: nauc_recall_at_5_std
value: 16.997
- type: nauc_recall_at_5_diff1
value: 12.4956
- type: nauc_recall_at_10_max
value: 42.0383
- type: nauc_recall_at_10_std
value: 12.9387
- type: nauc_recall_at_10_diff1
value: 10.699
- type: nauc_recall_at_20_max
value: 31.483
- type: nauc_recall_at_20_std
value: 11.967500000000001
- type: nauc_recall_at_20_diff1
value: 8.6104
- type: nauc_recall_at_100_max
value: 16.9294
- type: nauc_recall_at_100_std
value: 8.0626
- type: nauc_recall_at_100_diff1
value: 0.9781
- type: nauc_recall_at_1000_max
value: 5.0692
- type: nauc_recall_at_1000_std
value: 2.8923
- type: nauc_recall_at_1000_diff1
value: 1.661
- type: nauc_precision_at_1_max
value: 56.5121
- type: nauc_precision_at_1_std
value: 19.2292
- type: nauc_precision_at_1_diff1
value: 18.6031
- type: nauc_precision_at_3_max
value: 53.2864
- type: nauc_precision_at_3_std
value: 11.929499999999999
- type: nauc_precision_at_3_diff1
value: 14.321200000000001
- type: nauc_precision_at_5_max
value: 53.689
- type: nauc_precision_at_5_std
value: 16.997
- type: nauc_precision_at_5_diff1
value: 12.4956
- type: nauc_precision_at_10_max
value: 42.0383
- type: nauc_precision_at_10_std
value: 12.9387
- type: nauc_precision_at_10_diff1
value: 10.699
- type: nauc_precision_at_20_max
value: 31.483
- type: nauc_precision_at_20_std
value: 11.967500000000001
- type: nauc_precision_at_20_diff1
value: 8.6104
- type: nauc_precision_at_100_max
value: 16.9294
- type: nauc_precision_at_100_std
value: 8.0626
- type: nauc_precision_at_100_diff1
value: 0.9781
- type: nauc_precision_at_1000_max
value: 5.0423
- type: nauc_precision_at_1000_std
value: 2.8774
- type: nauc_precision_at_1000_diff1
value: 1.6759
- type: nauc_mrr_at_1_max
value: 56.5121
- type: nauc_mrr_at_1_std
value: 19.2292
- type: nauc_mrr_at_1_diff1
value: 18.6031
- type: nauc_mrr_at_3_max
value: 54.069599999999994
- type: nauc_mrr_at_3_std
value: 14.5317
- type: nauc_mrr_at_3_diff1
value: 15.2434
- type: nauc_mrr_at_5_max
value: 54.295
- type: nauc_mrr_at_5_std
value: 16.362
- type: nauc_mrr_at_5_diff1
value: 14.560200000000002
- type: nauc_mrr_at_10_max
value: 50.6652
- type: nauc_mrr_at_10_std
value: 14.840700000000002
- type: nauc_mrr_at_10_diff1
value: 13.7079
- type: nauc_mrr_at_20_max
value: 47.6818
- type: nauc_mrr_at_20_std
value: 14.355599999999999
- type: nauc_mrr_at_20_diff1
value: 12.894400000000001
- type: nauc_mrr_at_100_max
value: 43.4343
- type: nauc_mrr_at_100_std
value: 13.241
- type: nauc_mrr_at_100_diff1
value: 11.0841
- type: nauc_mrr_at_1000_max
value: 40.8708
- type: nauc_mrr_at_1000_std
value: 12.5722
- type: nauc_mrr_at_1000_diff1
value: 10.54
- type: main_score
value: 0.8250000000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.253
- type: ndcg_at_3
value: 0.418
- type: ndcg_at_5
value: 0.461
- type: ndcg_at_10
value: 0.715
- type: ndcg_at_20
value: 0.9450000000000001
- type: ndcg_at_100
value: 2.331
- type: ndcg_at_1000
value: 8.574
- type: map_at_1
value: 0.253
- type: map_at_3
value: 0.371
- type: map_at_5
value: 0.396
- type: map_at_10
value: 0.505
- type: map_at_20
value: 0.569
- type: map_at_100
value: 0.733
- type: map_at_1000
value: 0.8829999999999999
- type: recall_at_1
value: 0.253
- type: recall_at_3
value: 0.5559999999999999
- type: recall_at_5
value: 0.657
- type: recall_at_10
value: 1.4160000000000001
- type: recall_at_20
value: 2.326
- type: recall_at_100
value: 10.212
- type: recall_at_1000
value: 64.56
- type: precision_at_1
value: 0.253
- type: precision_at_3
value: 0.185
- type: precision_at_5
value: 0.131
- type: precision_at_10
value: 0.14200000000000002
- type: precision_at_20
value: 0.116
- type: precision_at_100
value: 0.10200000000000001
- type: precision_at_1000
value: 0.065
- type: mrr_at_1
value: 0.25279999999999997
- type: mrr_at_3
value: 0.3707
- type: mrr_at_5
value: 0.396
- type: mrr_at_10
value: 0.5054000000000001
- type: mrr_at_20
value: 0.5688000000000001
- type: mrr_at_100
value: 0.7331
- type: mrr_at_1000
value: 0.8831
- type: nauc_ndcg_at_1_max
value: 51.9741
- type: nauc_ndcg_at_1_std
value: 46.907700000000006
- type: nauc_ndcg_at_1_diff1
value: 30.1964
- type: nauc_ndcg_at_3_max
value: 41.3447
- type: nauc_ndcg_at_3_std
value: 24.360599999999998
- type: nauc_ndcg_at_3_diff1
value: 18.8418
- type: nauc_ndcg_at_5_max
value: 41.0319
- type: nauc_ndcg_at_5_std
value: 25.809199999999997
- type: nauc_ndcg_at_5_diff1
value: 24.909100000000002
- type: nauc_ndcg_at_10_max
value: 36.6761
- type: nauc_ndcg_at_10_std
value: 23.1623
- type: nauc_ndcg_at_10_diff1
value: 24.2909
- type: nauc_ndcg_at_20_max
value: 33.2627
- type: nauc_ndcg_at_20_std
value: 19.0886
- type: nauc_ndcg_at_20_diff1
value: 18.6171
- type: nauc_ndcg_at_100_max
value: 22.1033
- type: nauc_ndcg_at_100_std
value: 10.6684
- type: nauc_ndcg_at_100_diff1
value: 6.77
- type: nauc_ndcg_at_1000_max
value: 17.8432
- type: nauc_ndcg_at_1000_std
value: 5.2092
- type: nauc_ndcg_at_1000_diff1
value: 5.8879
- type: nauc_map_at_1_max
value: 51.9741
- type: nauc_map_at_1_std
value: 46.907700000000006
- type: nauc_map_at_1_diff1
value: 30.1964
- type: nauc_map_at_3_max
value: 42.766799999999996
- type: nauc_map_at_3_std
value: 29.0518
- type: nauc_map_at_3_diff1
value: 20.8244
- type: nauc_map_at_5_max
value: 42.464600000000004
- type: nauc_map_at_5_std
value: 29.7317
- type: nauc_map_at_5_diff1
value: 24.799699999999998
- type: nauc_map_at_10_max
value: 39.827600000000004
- type: nauc_map_at_10_std
value: 27.3121
- type: nauc_map_at_10_diff1
value: 24.6463
- type: nauc_map_at_20_max
value: 37.9365
- type: nauc_map_at_20_std
value: 24.8287
- type: nauc_map_at_20_diff1
value: 21.9878
- type: nauc_map_at_100_max
value: 33.333
- type: nauc_map_at_100_std
value: 20.2466
- type: nauc_map_at_100_diff1
value: 16.561
- type: nauc_map_at_1000_max
value: 31.8401
- type: nauc_map_at_1000_std
value: 18.740499999999997
- type: nauc_map_at_1000_diff1
value: 15.820400000000001
- type: nauc_recall_at_1_max
value: 51.9741
- type: nauc_recall_at_1_std
value: 46.907700000000006
- type: nauc_recall_at_1_diff1
value: 30.1964
- type: nauc_recall_at_3_max
value: 38.6984
- type: nauc_recall_at_3_std
value: 15.0644
- type: nauc_recall_at_3_diff1
value: 14.9959
- type: nauc_recall_at_5_max
value: 38.5959
- type: nauc_recall_at_5_std
value: 18.8551
- type: nauc_recall_at_5_diff1
value: 25.474200000000003
- type: nauc_recall_at_10_max
value: 32.6875
- type: nauc_recall_at_10_std
value: 18.4863
- type: nauc_recall_at_10_diff1
value: 23.8654
- type: nauc_recall_at_20_max
value: 28.6992
- type: nauc_recall_at_20_std
value: 14.019100000000002
- type: nauc_recall_at_20_diff1
value: 14.965100000000001
- type: nauc_recall_at_100_max
value: 16.8806
- type: nauc_recall_at_100_std
value: 7.1583
- type: nauc_recall_at_100_diff1
value: 2.6362
- type: nauc_recall_at_1000_max
value: 12.6884
- type: nauc_recall_at_1000_std
value: 0.3778
- type: nauc_recall_at_1000_diff1
value: 2.9179
- type: nauc_precision_at_1_max
value: 51.9741
- type: nauc_precision_at_1_std
value: 46.907700000000006
- type: nauc_precision_at_1_diff1
value: 30.1964
- type: nauc_precision_at_3_max
value: 38.6984
- type: nauc_precision_at_3_std
value: 15.0644
- type: nauc_precision_at_3_diff1
value: 14.9959
- type: nauc_precision_at_5_max
value: 38.5959
- type: nauc_precision_at_5_std
value: 18.8551
- type: nauc_precision_at_5_diff1
value: 25.474200000000003
- type: nauc_precision_at_10_max
value: 32.6875
- type: nauc_precision_at_10_std
value: 18.4863
- type: nauc_precision_at_10_diff1
value: 23.8654
- type: nauc_precision_at_20_max
value: 28.6992
- type: nauc_precision_at_20_std
value: 14.019100000000002
- type: nauc_precision_at_20_diff1
value: 14.965100000000001
- type: nauc_precision_at_100_max
value: 16.8806
- type: nauc_precision_at_100_std
value: 7.1583
- type: nauc_precision_at_100_diff1
value: 2.6362
- type: nauc_precision_at_1000_max
value: 12.6884
- type: nauc_precision_at_1000_std
value: 0.3778
- type: nauc_precision_at_1000_diff1
value: 2.9179
- type: nauc_mrr_at_1_max
value: 51.9741
- type: nauc_mrr_at_1_std
value: 46.907700000000006
- type: nauc_mrr_at_1_diff1
value: 30.1964
- type: nauc_mrr_at_3_max
value: 42.766799999999996
- type: nauc_mrr_at_3_std
value: 29.0518
- type: nauc_mrr_at_3_diff1
value: 20.8244
- type: nauc_mrr_at_5_max
value: 42.464600000000004
- type: nauc_mrr_at_5_std
value: 29.7317
- type: nauc_mrr_at_5_diff1
value: 24.799699999999998
- type: nauc_mrr_at_10_max
value: 39.827600000000004
- type: nauc_mrr_at_10_std
value: 27.3121
- type: nauc_mrr_at_10_diff1
value: 24.6463
- type: nauc_mrr_at_20_max
value: 37.9365
- type: nauc_mrr_at_20_std
value: 24.8287
- type: nauc_mrr_at_20_diff1
value: 21.9878
- type: nauc_mrr_at_100_max
value: 33.333
- type: nauc_mrr_at_100_std
value: 20.2466
- type: nauc_mrr_at_100_diff1
value: 16.561
- type: nauc_mrr_at_1000_max
value: 31.8401
- type: nauc_mrr_at_1000_std
value: 18.740499999999997
- type: nauc_mrr_at_1000_diff1
value: 15.820400000000001
- type: main_score
value: 0.715
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.328
- type: ndcg_at_3
value: 0.486
- type: ndcg_at_5
value: 0.683
- type: ndcg_at_10
value: 0.997
- type: ndcg_at_20
value: 1.365
- type: ndcg_at_100
value: 2.706
- type: ndcg_at_1000
value: 9.648
- type: map_at_1
value: 0.328
- type: map_at_3
value: 0.44600000000000006
- type: map_at_5
value: 0.553
- type: map_at_10
value: 0.6799999999999999
- type: map_at_20
value: 0.7779999999999999
- type: map_at_100
value: 0.935
- type: map_at_1000
value: 1.0999999999999999
- type: recall_at_1
value: 0.328
- type: recall_at_3
value: 0.601
- type: recall_at_5
value: 1.0919999999999999
- type: recall_at_10
value: 2.075
- type: recall_at_20
value: 3.55
- type: recall_at_100
value: 11.196
- type: recall_at_1000
value: 71.764
- type: precision_at_1
value: 0.328
- type: precision_at_3
value: 0.2
- type: precision_at_5
value: 0.218
- type: precision_at_10
value: 0.208
- type: precision_at_20
value: 0.17700000000000002
- type: precision_at_100
value: 0.11199999999999999
- type: precision_at_1000
value: 0.07200000000000001
- type: mrr_at_1
value: 0.3277
- type: mrr_at_3
value: 0.44600000000000006
- type: mrr_at_5
value: 0.5525
- type: mrr_at_10
value: 0.6796
- type: mrr_at_20
value: 0.7782
- type: mrr_at_100
value: 0.9353999999999999
- type: mrr_at_1000
value: 1.1002
- type: nauc_ndcg_at_1_max
value: 53.9859
- type: nauc_ndcg_at_1_std
value: -15.8864
- type: nauc_ndcg_at_1_diff1
value: 19.794600000000003
- type: nauc_ndcg_at_3_max
value: 50.3487
- type: nauc_ndcg_at_3_std
value: -15.716
- type: nauc_ndcg_at_3_diff1
value: 27.936299999999996
- type: nauc_ndcg_at_5_max
value: 40.6703
- type: nauc_ndcg_at_5_std
value: -14.965600000000002
- type: nauc_ndcg_at_5_diff1
value: 12.5167
- type: nauc_ndcg_at_10_max
value: 28.513500000000004
- type: nauc_ndcg_at_10_std
value: -12.0676
- type: nauc_ndcg_at_10_diff1
value: 9.7136
- type: nauc_ndcg_at_20_max
value: 23.6262
- type: nauc_ndcg_at_20_std
value: -12.1013
- type: nauc_ndcg_at_20_diff1
value: 9.2594
- type: nauc_ndcg_at_100_max
value: 13.739199999999999
- type: nauc_ndcg_at_100_std
value: -6.6952
- type: nauc_ndcg_at_100_diff1
value: 4.2473
- type: nauc_ndcg_at_1000_max
value: 9.275799999999998
- type: nauc_ndcg_at_1000_std
value: -5.5039
- type: nauc_ndcg_at_1000_diff1
value: 2.4499
- type: nauc_map_at_1_max
value: 53.9859
- type: nauc_map_at_1_std
value: -15.8864
- type: nauc_map_at_1_diff1
value: 19.794600000000003
- type: nauc_map_at_3_max
value: 51.153800000000004
- type: nauc_map_at_3_std
value: -15.7911
- type: nauc_map_at_3_diff1
value: 26.674599999999998
- type: nauc_map_at_5_max
value: 44.6463
- type: nauc_map_at_5_std
value: -15.310699999999999
- type: nauc_map_at_5_diff1
value: 16.8168
- type: nauc_map_at_10_max
value: 36.5886
- type: nauc_map_at_10_std
value: -13.2727
- type: nauc_map_at_10_diff1
value: 14.392199999999999
- type: nauc_map_at_20_max
value: 33.772200000000005
- type: nauc_map_at_20_std
value: -13.108500000000001
- type: nauc_map_at_20_diff1
value: 13.7855
- type: nauc_map_at_100_max
value: 28.4893
- type: nauc_map_at_100_std
value: -11.2989
- type: nauc_map_at_100_diff1
value: 11.4836
- type: nauc_map_at_1000_max
value: 26.9177
- type: nauc_map_at_1000_std
value: -11.165
- type: nauc_map_at_1000_diff1
value: 10.600999999999999
- type: nauc_recall_at_1_max
value: 53.9859
- type: nauc_recall_at_1_std
value: -15.8864
- type: nauc_recall_at_1_diff1
value: 19.794600000000003
- type: nauc_recall_at_3_max
value: 48.5745
- type: nauc_recall_at_3_std
value: -15.5412
- type: nauc_recall_at_3_diff1
value: 30.583900000000003
- type: nauc_recall_at_5_max
value: 34.0788
- type: nauc_recall_at_5_std
value: -14.3783
- type: nauc_recall_at_5_diff1
value: 4.9851
- type: nauc_recall_at_10_max
value: 19.0897
- type: nauc_recall_at_10_std
value: -10.734
- type: nauc_recall_at_10_diff1
value: 4.2515
- type: nauc_recall_at_20_max
value: 14.646
- type: nauc_recall_at_20_std
value: -11.3526
- type: nauc_recall_at_20_diff1
value: 5.4940999999999995
- type: nauc_recall_at_100_max
value: 7.383000000000001
- type: nauc_recall_at_100_std
value: -4.1648
- type: nauc_recall_at_100_diff1
value: 0.9353
- type: nauc_recall_at_1000_max
value: 2.4582
- type: nauc_recall_at_1000_std
value: -1.7946
- type: nauc_recall_at_1000_diff1
value: -0.0116
- type: nauc_precision_at_1_max
value: 53.9859
- type: nauc_precision_at_1_std
value: -15.8864
- type: nauc_precision_at_1_diff1
value: 19.794600000000003
- type: nauc_precision_at_3_max
value: 48.5745
- type: nauc_precision_at_3_std
value: -15.5412
- type: nauc_precision_at_3_diff1
value: 30.583900000000003
- type: nauc_precision_at_5_max
value: 34.0788
- type: nauc_precision_at_5_std
value: -14.3783
- type: nauc_precision_at_5_diff1
value: 4.9851
- type: nauc_precision_at_10_max
value: 19.0897
- type: nauc_precision_at_10_std
value: -10.734
- type: nauc_precision_at_10_diff1
value: 4.2515
- type: nauc_precision_at_20_max
value: 14.646
- type: nauc_precision_at_20_std
value: -11.3526
- type: nauc_precision_at_20_diff1
value: 5.4940999999999995
- type: nauc_precision_at_100_max
value: 7.383000000000001
- type: nauc_precision_at_100_std
value: -4.1648
- type: nauc_precision_at_100_diff1
value: 0.9353
- type: nauc_precision_at_1000_max
value: 2.4582
- type: nauc_precision_at_1000_std
value: -1.7946
- type: nauc_precision_at_1000_diff1
value: -0.0116
- type: nauc_mrr_at_1_max
value: 53.9859
- type: nauc_mrr_at_1_std
value: -15.8864
- type: nauc_mrr_at_1_diff1
value: 19.794600000000003
- type: nauc_mrr_at_3_max
value: 51.153800000000004
- type: nauc_mrr_at_3_std
value: -15.7911
- type: nauc_mrr_at_3_diff1
value: 26.674599999999998
- type: nauc_mrr_at_5_max
value: 44.6463
- type: nauc_mrr_at_5_std
value: -15.310699999999999
- type: nauc_mrr_at_5_diff1
value: 16.8168
- type: nauc_mrr_at_10_max
value: 36.5886
- type: nauc_mrr_at_10_std
value: -13.2727
- type: nauc_mrr_at_10_diff1
value: 14.392199999999999
- type: nauc_mrr_at_20_max
value: 33.772200000000005
- type: nauc_mrr_at_20_std
value: -13.108500000000001
- type: nauc_mrr_at_20_diff1
value: 13.7855
- type: nauc_mrr_at_100_max
value: 28.4893
- type: nauc_mrr_at_100_std
value: -11.2989
- type: nauc_mrr_at_100_diff1
value: 11.4836
- type: nauc_mrr_at_1000_max
value: 26.9177
- type: nauc_mrr_at_1000_std
value: -11.165
- type: nauc_mrr_at_1000_diff1
value: 10.600999999999999
- type: main_score
value: 0.997
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.391
- type: ndcg_at_3
value: 0.612
- type: ndcg_at_5
value: 0.795
- type: ndcg_at_10
value: 0.9820000000000001
- type: ndcg_at_20
value: 1.239
- type: ndcg_at_100
value: 2.341
- type: ndcg_at_1000
value: 8.206
- type: map_at_1
value: 0.391
- type: map_at_3
value: 0.5539999999999999
- type: map_at_5
value: 0.656
- type: map_at_10
value: 0.733
- type: map_at_20
value: 0.8019999999999999
- type: map_at_100
value: 0.9329999999999999
- type: map_at_1000
value: 1.069
- type: recall_at_1
value: 0.391
- type: recall_at_3
value: 0.782
- type: recall_at_5
value: 1.221
- type: recall_at_10
value: 1.8079999999999998
- type: recall_at_20
value: 2.833
- type: recall_at_100
value: 9.086
- type: recall_at_1000
value: 60.479000000000006
- type: precision_at_1
value: 0.391
- type: precision_at_3
value: 0.261
- type: precision_at_5
value: 0.244
- type: precision_at_10
value: 0.181
- type: precision_at_20
value: 0.14200000000000002
- type: precision_at_100
value: 0.091
- type: precision_at_1000
value: 0.06
- type: mrr_at_1
value: 0.3908
- type: mrr_at_3
value: 0.5537000000000001
- type: mrr_at_5
value: 0.6562
- type: mrr_at_10
value: 0.7326
- type: mrr_at_20
value: 0.8019999999999999
- type: mrr_at_100
value: 0.9327
- type: mrr_at_1000
value: 1.069
- type: nauc_ndcg_at_1_max
value: 22.3169
- type: nauc_ndcg_at_1_std
value: -17.4758
- type: nauc_ndcg_at_1_diff1
value: 1.8166000000000002
- type: nauc_ndcg_at_3_max
value: 17.6929
- type: nauc_ndcg_at_3_std
value: 9.7291
- type: nauc_ndcg_at_3_diff1
value: 7.194599999999999
- type: nauc_ndcg_at_5_max
value: 14.1354
- type: nauc_ndcg_at_5_std
value: 13.7104
- type: nauc_ndcg_at_5_diff1
value: 8.8759
- type: nauc_ndcg_at_10_max
value: 21.5601
- type: nauc_ndcg_at_10_std
value: 16.240299999999998
- type: nauc_ndcg_at_10_diff1
value: 5.8809000000000005
- type: nauc_ndcg_at_20_max
value: 22.5519
- type: nauc_ndcg_at_20_std
value: 15.6586
- type: nauc_ndcg_at_20_diff1
value: 8.152099999999999
- type: nauc_ndcg_at_100_max
value: 18.656100000000002
- type: nauc_ndcg_at_100_std
value: 9.4551
- type: nauc_ndcg_at_100_diff1
value: 7.2737
- type: nauc_ndcg_at_1000_max
value: 11.1981
- type: nauc_ndcg_at_1000_std
value: 5.075699999999999
- type: nauc_ndcg_at_1000_diff1
value: 1.3835
- type: nauc_map_at_1_max
value: 22.3169
- type: nauc_map_at_1_std
value: -17.4758
- type: nauc_map_at_1_diff1
value: 1.8166000000000002
- type: nauc_map_at_3_max
value: 18.4824
- type: nauc_map_at_3_std
value: 4.9891
- type: nauc_map_at_3_diff1
value: 7.0646
- type: nauc_map_at_5_max
value: 15.9382
- type: nauc_map_at_5_std
value: 8.3427
- type: nauc_map_at_5_diff1
value: 8.2007
- type: nauc_map_at_10_max
value: 19.8876
- type: nauc_map_at_10_std
value: 10.2508
- type: nauc_map_at_10_diff1
value: 6.5514
- type: nauc_map_at_20_max
value: 20.333499999999997
- type: nauc_map_at_20_std
value: 10.3019
- type: nauc_map_at_20_diff1
value: 7.6846
- type: nauc_map_at_100_max
value: 19.386
- type: nauc_map_at_100_std
value: 9.1304
- type: nauc_map_at_100_diff1
value: 7.4995
- type: nauc_map_at_1000_max
value: 18.398
- type: nauc_map_at_1000_std
value: 8.7011
- type: nauc_map_at_1000_diff1
value: 6.6249
- type: nauc_recall_at_1_max
value: 22.3169
- type: nauc_recall_at_1_std
value: -17.4758
- type: nauc_recall_at_1_diff1
value: 1.8166000000000002
- type: nauc_recall_at_3_max
value: 16.0786
- type: nauc_recall_at_3_std
value: 19.45
- type: nauc_recall_at_3_diff1
value: 7.2306
- type: nauc_recall_at_5_max
value: 11.106
- type: nauc_recall_at_5_std
value: 22.3805
- type: nauc_recall_at_5_diff1
value: 9.905100000000001
- type: nauc_recall_at_10_max
value: 24.482599999999998
- type: nauc_recall_at_10_std
value: 23.9065
- type: nauc_recall_at_10_diff1
value: 4.6589
- type: nauc_recall_at_20_max
value: 25.4127
- type: nauc_recall_at_20_std
value: 20.5898
- type: nauc_recall_at_20_diff1
value: 8.5451
- type: nauc_recall_at_100_max
value: 17.8939
- type: nauc_recall_at_100_std
value: 8.286200000000001
- type: nauc_recall_at_100_diff1
value: 7.000299999999999
- type: nauc_recall_at_1000_max
value: 6.693499999999999
- type: nauc_recall_at_1000_std
value: 1.6481
- type: nauc_recall_at_1000_diff1
value: -1.6732
- type: nauc_precision_at_1_max
value: 22.3169
- type: nauc_precision_at_1_std
value: -17.4758
- type: nauc_precision_at_1_diff1
value: 1.8166000000000002
- type: nauc_precision_at_3_max
value: 16.0786
- type: nauc_precision_at_3_std
value: 19.45
- type: nauc_precision_at_3_diff1
value: 7.2306
- type: nauc_precision_at_5_max
value: 11.106
- type: nauc_precision_at_5_std
value: 22.3805
- type: nauc_precision_at_5_diff1
value: 9.905100000000001
- type: nauc_precision_at_10_max
value: 24.482599999999998
- type: nauc_precision_at_10_std
value: 23.9065
- type: nauc_precision_at_10_diff1
value: 4.6589
- type: nauc_precision_at_20_max
value: 25.4127
- type: nauc_precision_at_20_std
value: 20.5898
- type: nauc_precision_at_20_diff1
value: 8.5451
- type: nauc_precision_at_100_max
value: 17.8939
- type: nauc_precision_at_100_std
value: 8.286200000000001
- type: nauc_precision_at_100_diff1
value: 7.000299999999999
- type: nauc_precision_at_1000_max
value: 6.693499999999999
- type: nauc_precision_at_1000_std
value: 1.6481
- type: nauc_precision_at_1000_diff1
value: -1.6732
- type: nauc_mrr_at_1_max
value: 22.3169
- type: nauc_mrr_at_1_std
value: -17.4758
- type: nauc_mrr_at_1_diff1
value: 1.8166000000000002
- type: nauc_mrr_at_3_max
value: 18.4824
- type: nauc_mrr_at_3_std
value: 4.9891
- type: nauc_mrr_at_3_diff1
value: 7.0646
- type: nauc_mrr_at_5_max
value: 15.9382
- type: nauc_mrr_at_5_std
value: 8.3427
- type: nauc_mrr_at_5_diff1
value: 8.2007
- type: nauc_mrr_at_10_max
value: 19.8876
- type: nauc_mrr_at_10_std
value: 10.2508
- type: nauc_mrr_at_10_diff1
value: 6.5514
- type: nauc_mrr_at_20_max
value: 20.333499999999997
- type: nauc_mrr_at_20_std
value: 10.3019
- type: nauc_mrr_at_20_diff1
value: 7.6846
- type: nauc_mrr_at_100_max
value: 19.386
- type: nauc_mrr_at_100_std
value: 9.1304
- type: nauc_mrr_at_100_diff1
value: 7.4995
- type: nauc_mrr_at_1000_max
value: 18.398
- type: nauc_mrr_at_1000_std
value: 8.7011
- type: nauc_mrr_at_1000_diff1
value: 6.6249
- type: main_score
value: 0.9820000000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.942
- type: ndcg_at_3
value: 1.093
- type: ndcg_at_5
value: 1.2189999999999999
- type: ndcg_at_10
value: 1.5010000000000001
- type: ndcg_at_20
value: 1.7500000000000002
- type: ndcg_at_100
value: 2.979
- type: ndcg_at_1000
value: 9.001000000000001
- type: map_at_1
value: 0.942
- type: map_at_3
value: 1.055
- type: map_at_5
value: 1.123
- type: map_at_10
value: 1.236
- type: map_at_20
value: 1.303
- type: map_at_100
value: 1.447
- type: map_at_1000
value: 1.587
- type: recall_at_1
value: 0.942
- type: recall_at_3
value: 1.204
- type: recall_at_5
value: 1.518
- type: recall_at_10
value: 2.407
- type: recall_at_20
value: 3.401
- type: recall_at_100
value: 10.413
- type: recall_at_1000
value: 63.239000000000004
- type: precision_at_1
value: 0.942
- type: precision_at_3
value: 0.40099999999999997
- type: precision_at_5
value: 0.304
- type: precision_at_10
value: 0.241
- type: precision_at_20
value: 0.16999999999999998
- type: precision_at_100
value: 0.104
- type: precision_at_1000
value: 0.063
- type: mrr_at_1
value: 0.9419000000000001
- type: mrr_at_3
value: 1.0553
- type: mrr_at_5
value: 1.1233
- type: mrr_at_10
value: 1.2364
- type: mrr_at_20
value: 1.3032
- type: mrr_at_100
value: 1.4472
- type: mrr_at_1000
value: 1.5868
- type: nauc_ndcg_at_1_max
value: 44.329
- type: nauc_ndcg_at_1_std
value: -22.1462
- type: nauc_ndcg_at_1_diff1
value: 54.6924
- type: nauc_ndcg_at_3_max
value: 44.3874
- type: nauc_ndcg_at_3_std
value: -12.476700000000001
- type: nauc_ndcg_at_3_diff1
value: 43.205799999999996
- type: nauc_ndcg_at_5_max
value: 40.2294
- type: nauc_ndcg_at_5_std
value: -7.8638
- type: nauc_ndcg_at_5_diff1
value: 41.3091
- type: nauc_ndcg_at_10_max
value: 38.2905
- type: nauc_ndcg_at_10_std
value: -5.8234
- type: nauc_ndcg_at_10_diff1
value: 35.6644
- type: nauc_ndcg_at_20_max
value: 32.7502
- type: nauc_ndcg_at_20_std
value: -3.6723
- type: nauc_ndcg_at_20_diff1
value: 32.0788
- type: nauc_ndcg_at_100_max
value: 18.657899999999998
- type: nauc_ndcg_at_100_std
value: 0.0926
- type: nauc_ndcg_at_100_diff1
value: 19.2937
- type: nauc_ndcg_at_1000_max
value: 12.2758
- type: nauc_ndcg_at_1000_std
value: -2.3555
- type: nauc_ndcg_at_1000_diff1
value: 13.314100000000002
- type: nauc_map_at_1_max
value: 44.329
- type: nauc_map_at_1_std
value: -22.1462
- type: nauc_map_at_1_diff1
value: 54.6924
- type: nauc_map_at_3_max
value: 44.405699999999996
- type: nauc_map_at_3_std
value: -14.424600000000002
- type: nauc_map_at_3_diff1
value: 45.6364
- type: nauc_map_at_5_max
value: 42.0327
- type: nauc_map_at_5_std
value: -11.7529
- type: nauc_map_at_5_diff1
value: 44.4403
- type: nauc_map_at_10_max
value: 40.7915
- type: nauc_map_at_10_std
value: -10.4077
- type: nauc_map_at_10_diff1
value: 41.1685
- type: nauc_map_at_20_max
value: 38.574799999999996
- type: nauc_map_at_20_std
value: -9.4044
- type: nauc_map_at_20_diff1
value: 39.5908
- type: nauc_map_at_100_max
value: 34.6009
- type: nauc_map_at_100_std
value: -7.71
- type: nauc_map_at_100_diff1
value: 35.6646
- type: nauc_map_at_1000_max
value: 33.46
- type: nauc_map_at_1000_std
value: -7.535500000000001
- type: nauc_map_at_1000_diff1
value: 34.6565
- type: nauc_recall_at_1_max
value: 44.329
- type: nauc_recall_at_1_std
value: -22.1462
- type: nauc_recall_at_1_diff1
value: 54.6924
- type: nauc_recall_at_3_max
value: 44.3297
- type: nauc_recall_at_3_std
value: -7.5964
- type: nauc_recall_at_3_diff1
value: 37.0708
- type: nauc_recall_at_5_max
value: 35.8238
- type: nauc_recall_at_5_std
value: 1.0823
- type: nauc_recall_at_5_diff1
value: 34.3532
- type: nauc_recall_at_10_max
value: 34.007
- type: nauc_recall_at_10_std
value: 1.8081
- type: nauc_recall_at_10_diff1
value: 26.466099999999997
- type: nauc_recall_at_20_max
value: 24.140900000000002
- type: nauc_recall_at_20_std
value: 4.0295
- type: nauc_recall_at_20_diff1
value: 21.781100000000002
- type: nauc_recall_at_100_max
value: 6.908499999999999
- type: nauc_recall_at_100_std
value: 4.5512
- type: nauc_recall_at_100_diff1
value: 7.940600000000001
- type: nauc_recall_at_1000_max
value: 0.2262
- type: nauc_recall_at_1000_std
value: -2.7483
- type: nauc_recall_at_1000_diff1
value: 1.2992
- type: nauc_precision_at_1_max
value: 44.329
- type: nauc_precision_at_1_std
value: -22.1462
- type: nauc_precision_at_1_diff1
value: 54.6924
- type: nauc_precision_at_3_max
value: 44.3297
- type: nauc_precision_at_3_std
value: -7.5964
- type: nauc_precision_at_3_diff1
value: 37.0708
- type: nauc_precision_at_5_max
value: 35.8238
- type: nauc_precision_at_5_std
value: 1.0823
- type: nauc_precision_at_5_diff1
value: 34.3532
- type: nauc_precision_at_10_max
value: 34.007
- type: nauc_precision_at_10_std
value: 1.8081
- type: nauc_precision_at_10_diff1
value: 26.466099999999997
- type: nauc_precision_at_20_max
value: 24.140900000000002
- type: nauc_precision_at_20_std
value: 4.0295
- type: nauc_precision_at_20_diff1
value: 21.781100000000002
- type: nauc_precision_at_100_max
value: 6.908499999999999
- type: nauc_precision_at_100_std
value: 4.5512
- type: nauc_precision_at_100_diff1
value: 7.940600000000001
- type: nauc_precision_at_1000_max
value: 0.3281
- type: nauc_precision_at_1000_std
value: -2.6999
- type: nauc_precision_at_1000_diff1
value: 1.2890000000000001
- type: nauc_mrr_at_1_max
value: 44.329
- type: nauc_mrr_at_1_std
value: -22.1462
- type: nauc_mrr_at_1_diff1
value: 54.6924
- type: nauc_mrr_at_3_max
value: 44.405699999999996
- type: nauc_mrr_at_3_std
value: -14.424600000000002
- type: nauc_mrr_at_3_diff1
value: 45.6364
- type: nauc_mrr_at_5_max
value: 42.0327
- type: nauc_mrr_at_5_std
value: -11.7529
- type: nauc_mrr_at_5_diff1
value: 44.4403
- type: nauc_mrr_at_10_max
value: 40.7915
- type: nauc_mrr_at_10_std
value: -10.4077
- type: nauc_mrr_at_10_diff1
value: 41.1685
- type: nauc_mrr_at_20_max
value: 38.574799999999996
- type: nauc_mrr_at_20_std
value: -9.4044
- type: nauc_mrr_at_20_diff1
value: 39.5908
- type: nauc_mrr_at_100_max
value: 34.6009
- type: nauc_mrr_at_100_std
value: -7.71
- type: nauc_mrr_at_100_diff1
value: 35.6646
- type: nauc_mrr_at_1000_max
value: 33.461800000000004
- type: nauc_mrr_at_1000_std
value: -7.5348
- type: nauc_mrr_at_1000_diff1
value: 34.6565
- type: main_score
value: 1.5010000000000001
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (ar)
type: jinaai/mintakaqa
config: ar
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: ndcg_at_1
value: 7.308000000000001
- type: ndcg_at_3
value: 10.071
- type: ndcg_at_5
value: 10.985
- type: ndcg_at_10
value: 12.306000000000001
- type: ndcg_at_20
value: 13.205
- type: ndcg_at_100
value: 14.701
- type: ndcg_at_1000
value: 20.005
- type: map_at_1
value: 7.308000000000001
- type: map_at_3
value: 9.366
- type: map_at_5
value: 9.872
- type: map_at_10
value: 10.424999999999999
- type: map_at_20
value: 10.674999999999999
- type: map_at_100
value: 10.859
- type: map_at_1000
value: 10.984
- type: recall_at_1
value: 7.308000000000001
- type: recall_at_3
value: 12.120000000000001
- type: recall_at_5
value: 14.344000000000001
- type: recall_at_10
value: 18.384
- type: recall_at_20
value: 21.925
- type: recall_at_100
value: 30.322
- type: recall_at_1000
value: 76.668
- type: precision_at_1
value: 7.308000000000001
- type: precision_at_3
value: 4.04
- type: precision_at_5
value: 2.869
- type: precision_at_10
value: 1.838
- type: precision_at_20
value: 1.0959999999999999
- type: precision_at_100
value: 0.303
- type: precision_at_1000
value: 0.077
- type: mrr_at_1
value: 7.308199999999999
- type: mrr_at_3
value: 9.366
- type: mrr_at_5
value: 9.8721
- type: mrr_at_10
value: 10.4255
- type: mrr_at_20
value: 10.6746
- type: mrr_at_100
value: 10.8587
- type: mrr_at_1000
value: 10.9839
- type: nauc_ndcg_at_1_max
value: 21.783
- type: nauc_ndcg_at_1_std
value: 20.8127
- type: nauc_ndcg_at_1_diff1
value: 21.791
- type: nauc_ndcg_at_3_max
value: 18.2102
- type: nauc_ndcg_at_3_std
value: 17.9469
- type: nauc_ndcg_at_3_diff1
value: 14.283399999999999
- type: nauc_ndcg_at_5_max
value: 18.4726
- type: nauc_ndcg_at_5_std
value: 19.3571
- type: nauc_ndcg_at_5_diff1
value: 13.2607
- type: nauc_ndcg_at_10_max
value: 18.5108
- type: nauc_ndcg_at_10_std
value: 21.5774
- type: nauc_ndcg_at_10_diff1
value: 11.7807
- type: nauc_ndcg_at_20_max
value: 18.4889
- type: nauc_ndcg_at_20_std
value: 22.3138
- type: nauc_ndcg_at_20_diff1
value: 12.0277
- type: nauc_ndcg_at_100_max
value: 17.5017
- type: nauc_ndcg_at_100_std
value: 21.1196
- type: nauc_ndcg_at_100_diff1
value: 11.5115
- type: nauc_ndcg_at_1000_max
value: 17.2058
- type: nauc_ndcg_at_1000_std
value: 20.3049
- type: nauc_ndcg_at_1000_diff1
value: 11.5737
- type: nauc_map_at_1_max
value: 21.783
- type: nauc_map_at_1_std
value: 20.8127
- type: nauc_map_at_1_diff1
value: 21.791
- type: nauc_map_at_3_max
value: 18.8523
- type: nauc_map_at_3_std
value: 18.4494
- type: nauc_map_at_3_diff1
value: 15.720899999999999
- type: nauc_map_at_5_max
value: 19.0264
- type: nauc_map_at_5_std
value: 19.329
- type: nauc_map_at_5_diff1
value: 15.057100000000002
- type: nauc_map_at_10_max
value: 19.038
- type: nauc_map_at_10_std
value: 20.3913
- type: nauc_map_at_10_diff1
value: 14.2778
- type: nauc_map_at_20_max
value: 19.0167
- type: nauc_map_at_20_std
value: 20.6651
- type: nauc_map_at_20_diff1
value: 14.2818
- type: nauc_map_at_100_max
value: 18.8506
- type: nauc_map_at_100_std
value: 20.5035
- type: nauc_map_at_100_diff1
value: 14.194300000000002
- type: nauc_map_at_1000_max
value: 18.814600000000002
- type: nauc_map_at_1000_std
value: 20.4537
- type: nauc_map_at_1000_diff1
value: 14.1742
- type: nauc_recall_at_1_max
value: 21.783
- type: nauc_recall_at_1_std
value: 20.8127
- type: nauc_recall_at_1_diff1
value: 21.791
- type: nauc_recall_at_3_max
value: 16.7429
- type: nauc_recall_at_3_std
value: 16.8033
- type: nauc_recall_at_3_diff1
value: 10.9673
- type: nauc_recall_at_5_max
value: 17.305400000000002
- type: nauc_recall_at_5_std
value: 19.543
- type: nauc_recall_at_5_diff1
value: 9.339
- type: nauc_recall_at_10_max
value: 17.5378
- type: nauc_recall_at_10_std
value: 24.3867
- type: nauc_recall_at_10_diff1
value: 6.776
- type: nauc_recall_at_20_max
value: 17.6106
- type: nauc_recall_at_20_std
value: 25.9784
- type: nauc_recall_at_20_diff1
value: 8.1176
- type: nauc_recall_at_100_max
value: 14.5343
- type: nauc_recall_at_100_std
value: 21.406
- type: nauc_recall_at_100_diff1
value: 6.8826
- type: nauc_recall_at_1000_max
value: 11.740200000000002
- type: nauc_recall_at_1000_std
value: 16.5951
- type: nauc_recall_at_1000_diff1
value: 5.6598999999999995
- type: nauc_precision_at_1_max
value: 21.783
- type: nauc_precision_at_1_std
value: 20.8127
- type: nauc_precision_at_1_diff1
value: 21.791
- type: nauc_precision_at_3_max
value: 16.7429
- type: nauc_precision_at_3_std
value: 16.8033
- type: nauc_precision_at_3_diff1
value: 10.9673
- type: nauc_precision_at_5_max
value: 17.305400000000002
- type: nauc_precision_at_5_std
value: 19.543
- type: nauc_precision_at_5_diff1
value: 9.339
- type: nauc_precision_at_10_max
value: 17.5378
- type: nauc_precision_at_10_std
value: 24.3867
- type: nauc_precision_at_10_diff1
value: 6.776
- type: nauc_precision_at_20_max
value: 17.6106
- type: nauc_precision_at_20_std
value: 25.9784
- type: nauc_precision_at_20_diff1
value: 8.1176
- type: nauc_precision_at_100_max
value: 14.5343
- type: nauc_precision_at_100_std
value: 21.406
- type: nauc_precision_at_100_diff1
value: 6.8826
- type: nauc_precision_at_1000_max
value: 11.740200000000002
- type: nauc_precision_at_1000_std
value: 16.5951
- type: nauc_precision_at_1000_diff1
value: 5.6598999999999995
- type: nauc_mrr_at_1_max
value: 21.783
- type: nauc_mrr_at_1_std
value: 20.8127
- type: nauc_mrr_at_1_diff1
value: 21.791
- type: nauc_mrr_at_3_max
value: 18.8523
- type: nauc_mrr_at_3_std
value: 18.4494
- type: nauc_mrr_at_3_diff1
value: 15.720899999999999
- type: nauc_mrr_at_5_max
value: 19.0264
- type: nauc_mrr_at_5_std
value: 19.329
- type: nauc_mrr_at_5_diff1
value: 15.057100000000002
- type: nauc_mrr_at_10_max
value: 19.038
- type: nauc_mrr_at_10_std
value: 20.3913
- type: nauc_mrr_at_10_diff1
value: 14.2778
- type: nauc_mrr_at_20_max
value: 19.0167
- type: nauc_mrr_at_20_std
value: 20.6651
- type: nauc_mrr_at_20_diff1
value: 14.2818
- type: nauc_mrr_at_100_max
value: 18.8506
- type: nauc_mrr_at_100_std
value: 20.5035
- type: nauc_mrr_at_100_diff1
value: 14.194300000000002
- type: nauc_mrr_at_1000_max
value: 18.814600000000002
- type: nauc_mrr_at_1000_std
value: 20.4537
- type: nauc_mrr_at_1000_diff1
value: 14.1742
- type: main_score
value: 12.306000000000001
- task:
type: Retrieval
dataset:
name: MTEB MrTidyRetrieval (arabic)
type: mteb/mrtidy
config: arabic
split: test
revision: fc24a3ce8f09746410daee3d5cd823ff7a0675b7
metrics:
- type: ndcg_at_1
value: 2.128
- type: ndcg_at_3
value: 2.632
- type: ndcg_at_5
value: 3.2329999999999997
- type: ndcg_at_10
value: 3.9469999999999996
- type: ndcg_at_20
value: 4.4479999999999995
- type: ndcg_at_100
value: 6.2330000000000005
- type: ndcg_at_1000
value: 8.812000000000001
- type: map_at_1
value: 1.989
- type: map_at_3
value: 2.444
- type: map_at_5
value: 2.786
- type: map_at_10
value: 3.078
- type: map_at_20
value: 3.2099999999999995
- type: map_at_100
value: 3.42
- type: map_at_1000
value: 3.497
- type: recall_at_1
value: 1.989
- type: recall_at_3
value: 3.006
- type: recall_at_5
value: 4.394
- type: recall_at_10
value: 6.614000000000001
- type: recall_at_20
value: 8.511000000000001
- type: recall_at_100
value: 18.378
- type: recall_at_1000
value: 39.300000000000004
- type: precision_at_1
value: 2.128
- type: precision_at_3
value: 1.079
- type: precision_at_5
value: 0.962
- type: precision_at_10
value: 0.712
- type: precision_at_20
value: 0.47200000000000003
- type: precision_at_100
value: 0.20500000000000002
- type: precision_at_1000
value: 0.044000000000000004
- type: mrr_at_1
value: 2.1277
- type: mrr_at_3
value: 2.621
- type: mrr_at_5
value: 2.9726
- type: mrr_at_10
value: 3.2579
- type: mrr_at_20
value: 3.4111000000000002
- type: mrr_at_100
value: 3.6346999999999996
- type: mrr_at_1000
value: 3.7098
- type: nauc_ndcg_at_1_max
value: 9.8338
- type: nauc_ndcg_at_1_std
value: -12.548
- type: nauc_ndcg_at_1_diff1
value: 23.988100000000003
- type: nauc_ndcg_at_3_max
value: 14.5487
- type: nauc_ndcg_at_3_std
value: -14.249400000000001
- type: nauc_ndcg_at_3_diff1
value: 24.1887
- type: nauc_ndcg_at_5_max
value: 15.2084
- type: nauc_ndcg_at_5_std
value: -12.0395
- type: nauc_ndcg_at_5_diff1
value: 21.9387
- type: nauc_ndcg_at_10_max
value: 16.49
- type: nauc_ndcg_at_10_std
value: -9.2455
- type: nauc_ndcg_at_10_diff1
value: 19.6085
- type: nauc_ndcg_at_20_max
value: 16.7376
- type: nauc_ndcg_at_20_std
value: -7.4205
- type: nauc_ndcg_at_20_diff1
value: 17.7278
- type: nauc_ndcg_at_100_max
value: 12.4233
- type: nauc_ndcg_at_100_std
value: -5.614800000000001
- type: nauc_ndcg_at_100_diff1
value: 14.599799999999998
- type: nauc_ndcg_at_1000_max
value: 14.0367
- type: nauc_ndcg_at_1000_std
value: -4.0573
- type: nauc_ndcg_at_1000_diff1
value: 15.4415
- type: nauc_map_at_1_max
value: 12.962499999999999
- type: nauc_map_at_1_std
value: -11.679599999999999
- type: nauc_map_at_1_diff1
value: 24.3343
- type: nauc_map_at_3_max
value: 14.8937
- type: nauc_map_at_3_std
value: -13.460700000000001
- type: nauc_map_at_3_diff1
value: 24.3587
- type: nauc_map_at_5_max
value: 15.174299999999999
- type: nauc_map_at_5_std
value: -12.3433
- type: nauc_map_at_5_diff1
value: 22.753899999999998
- type: nauc_map_at_10_max
value: 15.7631
- type: nauc_map_at_10_std
value: -10.7924
- type: nauc_map_at_10_diff1
value: 21.3339
- type: nauc_map_at_20_max
value: 15.8264
- type: nauc_map_at_20_std
value: -10.1158
- type: nauc_map_at_20_diff1
value: 20.6053
- type: nauc_map_at_100_max
value: 14.8213
- type: nauc_map_at_100_std
value: -9.7321
- type: nauc_map_at_100_diff1
value: 19.7135
- type: nauc_map_at_1000_max
value: 14.8924
- type: nauc_map_at_1000_std
value: -9.5351
- type: nauc_map_at_1000_diff1
value: 19.6631
- type: nauc_recall_at_1_max
value: 12.962499999999999
- type: nauc_recall_at_1_std
value: -11.679599999999999
- type: nauc_recall_at_1_diff1
value: 24.3343
- type: nauc_recall_at_3_max
value: 16.7586
- type: nauc_recall_at_3_std
value: -15.3483
- type: nauc_recall_at_3_diff1
value: 25.061899999999998
- type: nauc_recall_at_5_max
value: 17.8571
- type: nauc_recall_at_5_std
value: -11.274099999999999
- type: nauc_recall_at_5_diff1
value: 21.6014
- type: nauc_recall_at_10_max
value: 19.5196
- type: nauc_recall_at_10_std
value: -6.507899999999999
- type: nauc_recall_at_10_diff1
value: 17.893
- type: nauc_recall_at_20_max
value: 19.6178
- type: nauc_recall_at_20_std
value: -3.0103999999999997
- type: nauc_recall_at_20_diff1
value: 14.6408
- type: nauc_recall_at_100_max
value: 10.41
- type: nauc_recall_at_100_std
value: -0.7312
- type: nauc_recall_at_100_diff1
value: 10.3312
- type: nauc_recall_at_1000_max
value: 15.058
- type: nauc_recall_at_1000_std
value: 1.5328
- type: nauc_recall_at_1000_diff1
value: 13.9017
- type: nauc_precision_at_1_max
value: 9.8338
- type: nauc_precision_at_1_std
value: -12.548
- type: nauc_precision_at_1_diff1
value: 23.988100000000003
- type: nauc_precision_at_3_max
value: 12.634699999999999
- type: nauc_precision_at_3_std
value: -16.3304
- type: nauc_precision_at_3_diff1
value: 22.9192
- type: nauc_precision_at_5_max
value: 12.7579
- type: nauc_precision_at_5_std
value: -11.520199999999999
- type: nauc_precision_at_5_diff1
value: 17.8422
- type: nauc_precision_at_10_max
value: 15.9994
- type: nauc_precision_at_10_std
value: -6.447700000000001
- type: nauc_precision_at_10_diff1
value: 15.634799999999998
- type: nauc_precision_at_20_max
value: 16.1337
- type: nauc_precision_at_20_std
value: -3.8893999999999997
- type: nauc_precision_at_20_diff1
value: 11.8299
- type: nauc_precision_at_100_max
value: 7.0385
- type: nauc_precision_at_100_std
value: -2.4169
- type: nauc_precision_at_100_diff1
value: 7.9619
- type: nauc_precision_at_1000_max
value: 11.1822
- type: nauc_precision_at_1000_std
value: -0.7087
- type: nauc_precision_at_1000_diff1
value: 11.1584
- type: nauc_mrr_at_1_max
value: 9.8338
- type: nauc_mrr_at_1_std
value: -12.548
- type: nauc_mrr_at_1_diff1
value: 23.988100000000003
- type: nauc_mrr_at_3_max
value: 11.2985
- type: nauc_mrr_at_3_std
value: -14.4349
- type: nauc_mrr_at_3_diff1
value: 23.0904
- type: nauc_mrr_at_5_max
value: 11.9144
- type: nauc_mrr_at_5_std
value: -12.544
- type: nauc_mrr_at_5_diff1
value: 21.580099999999998
- type: nauc_mrr_at_10_max
value: 12.802299999999999
- type: nauc_mrr_at_10_std
value: -11.1495
- type: nauc_mrr_at_10_diff1
value: 20.1189
- type: nauc_mrr_at_20_max
value: 13.0409
- type: nauc_mrr_at_20_std
value: -10.516399999999999
- type: nauc_mrr_at_20_diff1
value: 19.3462
- type: nauc_mrr_at_100_max
value: 12.0976
- type: nauc_mrr_at_100_std
value: -10.1146
- type: nauc_mrr_at_100_diff1
value: 18.3944
- type: nauc_mrr_at_1000_max
value: 12.155100000000001
- type: nauc_mrr_at_1000_std
value: -9.9877
- type: nauc_mrr_at_1000_diff1
value: 18.390500000000003
- type: main_score
value: 3.9469999999999996
- task:
type: Retrieval
dataset:
name: MTEB SadeemQuestionRetrieval (default)
type: sadeem-ai/sadeem-ar-eval-retrieval-questions
config: default
split: test
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
metrics:
- type: ndcg_at_1
value: 19.435
- type: ndcg_at_3
value: 42.789
- type: ndcg_at_5
value: 44.798
- type: ndcg_at_10
value: 46.705999999999996
- type: ndcg_at_20
value: 48.193000000000005
- type: ndcg_at_100
value: 49.882
- type: ndcg_at_1000
value: 50.924
- type: map_at_1
value: 19.435
- type: map_at_3
value: 36.596000000000004
- type: map_at_5
value: 37.721
- type: map_at_10
value: 38.521
- type: map_at_20
value: 38.934999999999995
- type: map_at_100
value: 39.169
- type: map_at_1000
value: 39.205
- type: recall_at_1
value: 19.435
- type: recall_at_3
value: 60.89
- type: recall_at_5
value: 65.725
- type: recall_at_10
value: 71.565
- type: recall_at_20
value: 77.405
- type: recall_at_100
value: 86.50099999999999
- type: recall_at_1000
value: 94.926
- type: precision_at_1
value: 19.435
- type: precision_at_3
value: 20.297
- type: precision_at_5
value: 13.145000000000001
- type: precision_at_10
value: 7.156999999999999
- type: precision_at_20
value: 3.8699999999999997
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.095
- type: mrr_at_1
value: 17.8076
- type: mrr_at_3
value: 35.4875
- type: mrr_at_5
value: 36.78
- type: mrr_at_10
value: 37.5405
- type: mrr_at_20
value: 37.966
- type: mrr_at_100
value: 38.1923
- type: mrr_at_1000
value: 38.2282
- type: nauc_ndcg_at_1_max
value: 33.4563
- type: nauc_ndcg_at_1_std
value: 14.063300000000002
- type: nauc_ndcg_at_1_diff1
value: -29.665999999999997
- type: nauc_ndcg_at_3_max
value: 55.5122
- type: nauc_ndcg_at_3_std
value: 23.3885
- type: nauc_ndcg_at_3_diff1
value: -60.501099999999994
- type: nauc_ndcg_at_5_max
value: 54.832499999999996
- type: nauc_ndcg_at_5_std
value: 23.6066
- type: nauc_ndcg_at_5_diff1
value: -57.5511
- type: nauc_ndcg_at_10_max
value: 54.089600000000004
- type: nauc_ndcg_at_10_std
value: 23.9497
- type: nauc_ndcg_at_10_diff1
value: -55.457699999999996
- type: nauc_ndcg_at_20_max
value: 53.3345
- type: nauc_ndcg_at_20_std
value: 24.313399999999998
- type: nauc_ndcg_at_20_diff1
value: -54.1937
- type: nauc_ndcg_at_100_max
value: 52.2829
- type: nauc_ndcg_at_100_std
value: 24.3924
- type: nauc_ndcg_at_100_diff1
value: -52.9938
- type: nauc_ndcg_at_1000_max
value: 51.5458
- type: nauc_ndcg_at_1000_std
value: 23.4862
- type: nauc_ndcg_at_1000_diff1
value: -51.9041
- type: nauc_map_at_1_max
value: 33.4563
- type: nauc_map_at_1_std
value: 14.063300000000002
- type: nauc_map_at_1_diff1
value: -29.665999999999997
- type: nauc_map_at_3_max
value: 49.4643
- type: nauc_map_at_3_std
value: 20.686
- type: nauc_map_at_3_diff1
value: -51.4965
- type: nauc_map_at_5_max
value: 48.976
- type: nauc_map_at_5_std
value: 20.7495
- type: nauc_map_at_5_diff1
value: -49.645
- type: nauc_map_at_10_max
value: 48.5698
- type: nauc_map_at_10_std
value: 20.8694
- type: nauc_map_at_10_diff1
value: -48.673100000000005
- type: nauc_map_at_20_max
value: 48.3171
- type: nauc_map_at_20_std
value: 20.951900000000002
- type: nauc_map_at_20_diff1
value: -48.2722
- type: nauc_map_at_100_max
value: 48.1488
- type: nauc_map_at_100_std
value: 20.9507
- type: nauc_map_at_100_diff1
value: -48.0933
- type: nauc_map_at_1000_max
value: 48.1232
- type: nauc_map_at_1000_std
value: 20.9226
- type: nauc_map_at_1000_diff1
value: -48.0486
- type: nauc_recall_at_1_max
value: 33.4563
- type: nauc_recall_at_1_std
value: 14.063300000000002
- type: nauc_recall_at_1_diff1
value: -29.665999999999997
- type: nauc_recall_at_3_max
value: 73.1441
- type: nauc_recall_at_3_std
value: 31.3154
- type: nauc_recall_at_3_diff1
value: -86.93469999999999
- type: nauc_recall_at_5_max
value: 73.0428
- type: nauc_recall_at_5_std
value: 32.6181
- type: nauc_recall_at_5_diff1
value: -82.15289999999999
- type: nauc_recall_at_10_max
value: 73.0875
- type: nauc_recall_at_10_std
value: 34.933
- type: nauc_recall_at_10_diff1
value: -78.28
- type: nauc_recall_at_20_max
value: 73.03150000000001
- type: nauc_recall_at_20_std
value: 38.8894
- type: nauc_recall_at_20_diff1
value: -76.3884
- type: nauc_recall_at_100_max
value: 73.2723
- type: nauc_recall_at_100_std
value: 47.7568
- type: nauc_recall_at_100_diff1
value: -75.98169999999999
- type: nauc_recall_at_1000_max
value: 76.5266
- type: nauc_recall_at_1000_std
value: 47.3315
- type: nauc_recall_at_1000_diff1
value: -70.95139999999999
- type: nauc_precision_at_1_max
value: 33.4563
- type: nauc_precision_at_1_std
value: 14.063300000000002
- type: nauc_precision_at_1_diff1
value: -29.665999999999997
- type: nauc_precision_at_3_max
value: 73.1441
- type: nauc_precision_at_3_std
value: 31.3154
- type: nauc_precision_at_3_diff1
value: -86.93469999999999
- type: nauc_precision_at_5_max
value: 73.0428
- type: nauc_precision_at_5_std
value: 32.6181
- type: nauc_precision_at_5_diff1
value: -82.15289999999999
- type: nauc_precision_at_10_max
value: 73.0875
- type: nauc_precision_at_10_std
value: 34.933
- type: nauc_precision_at_10_diff1
value: -78.28
- type: nauc_precision_at_20_max
value: 73.03150000000001
- type: nauc_precision_at_20_std
value: 38.8894
- type: nauc_precision_at_20_diff1
value: -76.3884
- type: nauc_precision_at_100_max
value: 73.2723
- type: nauc_precision_at_100_std
value: 47.7568
- type: nauc_precision_at_100_diff1
value: -75.98169999999999
- type: nauc_precision_at_1000_max
value: 76.5266
- type: nauc_precision_at_1000_std
value: 47.3315
- type: nauc_precision_at_1000_diff1
value: -70.95139999999999
- type: nauc_mrr_at_1_max
value: 28.7221
- type: nauc_mrr_at_1_std
value: 11.3037
- type: nauc_mrr_at_1_diff1
value: -36.5891
- type: nauc_mrr_at_3_max
value: 47.3382
- type: nauc_mrr_at_3_std
value: 19.6286
- type: nauc_mrr_at_3_diff1
value: -57.08689999999999
- type: nauc_mrr_at_5_max
value: 46.6486
- type: nauc_mrr_at_5_std
value: 19.6178
- type: nauc_mrr_at_5_diff1
value: -55.2681
- type: nauc_mrr_at_10_max
value: 46.0209
- type: nauc_mrr_at_10_std
value: 19.5032
- type: nauc_mrr_at_10_diff1
value: -54.3868
- type: nauc_mrr_at_20_max
value: 45.729
- type: nauc_mrr_at_20_std
value: 19.4986
- type: nauc_mrr_at_20_diff1
value: -53.967699999999994
- type: nauc_mrr_at_100_max
value: 45.5478
- type: nauc_mrr_at_100_std
value: 19.484299999999998
- type: nauc_mrr_at_100_diff1
value: -53.8288
- type: nauc_mrr_at_1000_max
value: 45.5182
- type: nauc_mrr_at_1000_std
value: 19.453400000000002
- type: nauc_mrr_at_1000_diff1
value: -53.7893
- type: main_score
value: 46.705999999999996
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-ara)
type: jinaai/xpqa
config: ara-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_3
value: 20.547
- type: ndcg_at_5
value: 21.232
- type: ndcg_at_10
value: 23.518
- type: ndcg_at_20
value: 25.659
- type: ndcg_at_100
value: 29.643000000000004
- type: ndcg_at_1000
value: 34.81
- type: map_at_1
value: 10.544
- type: map_at_3
value: 16.2
- type: map_at_5
value: 17.743000000000002
- type: map_at_10
value: 18.951
- type: map_at_20
value: 19.704
- type: map_at_100
value: 20.355
- type: map_at_1000
value: 20.569000000000003
- type: recall_at_1
value: 10.544
- type: recall_at_3
value: 19.32
- type: recall_at_5
value: 23.355999999999998
- type: recall_at_10
value: 28.951
- type: recall_at_20
value: 35.878
- type: recall_at_100
value: 54.496
- type: recall_at_1000
value: 90.958
- type: precision_at_1
value: 20.8
- type: precision_at_3
value: 14.133000000000001
- type: precision_at_5
value: 10.453
- type: precision_at_10
value: 6.52
- type: precision_at_20
value: 4.0
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.186
- type: mrr_at_1
value: 20.8
- type: mrr_at_3
value: 24.8444
- type: mrr_at_5
value: 25.7911
- type: mrr_at_10
value: 26.5573
- type: mrr_at_20
value: 27.030500000000004
- type: mrr_at_100
value: 27.4134
- type: mrr_at_1000
value: 27.528799999999997
- type: nauc_ndcg_at_1_max
value: 31.7051
- type: nauc_ndcg_at_1_std
value: 1.2411999999999999
- type: nauc_ndcg_at_1_diff1
value: 33.0747
- type: nauc_ndcg_at_3_max
value: 30.142400000000002
- type: nauc_ndcg_at_3_std
value: -0.9313999999999999
- type: nauc_ndcg_at_3_diff1
value: 26.7065
- type: nauc_ndcg_at_5_max
value: 29.7749
- type: nauc_ndcg_at_5_std
value: 0.0249
- type: nauc_ndcg_at_5_diff1
value: 26.8829
- type: nauc_ndcg_at_10_max
value: 30.777500000000003
- type: nauc_ndcg_at_10_std
value: 0.7138
- type: nauc_ndcg_at_10_diff1
value: 26.270599999999998
- type: nauc_ndcg_at_20_max
value: 30.8149
- type: nauc_ndcg_at_20_std
value: 0.7107
- type: nauc_ndcg_at_20_diff1
value: 26.0781
- type: nauc_ndcg_at_100_max
value: 30.1661
- type: nauc_ndcg_at_100_std
value: 1.4445
- type: nauc_ndcg_at_100_diff1
value: 25.7807
- type: nauc_ndcg_at_1000_max
value: 31.0257
- type: nauc_ndcg_at_1000_std
value: 1.8606999999999998
- type: nauc_ndcg_at_1000_diff1
value: 27.2222
- type: nauc_map_at_1_max
value: 17.7301
- type: nauc_map_at_1_std
value: -3.6554999999999995
- type: nauc_map_at_1_diff1
value: 31.9805
- type: nauc_map_at_3_max
value: 27.411400000000004
- type: nauc_map_at_3_std
value: -2.1001
- type: nauc_map_at_3_diff1
value: 26.7978
- type: nauc_map_at_5_max
value: 28.4826
- type: nauc_map_at_5_std
value: -1.5623
- type: nauc_map_at_5_diff1
value: 26.6386
- type: nauc_map_at_10_max
value: 29.229300000000002
- type: nauc_map_at_10_std
value: -1.2293
- type: nauc_map_at_10_diff1
value: 26.287
- type: nauc_map_at_20_max
value: 29.4007
- type: nauc_map_at_20_std
value: -1.0069
- type: nauc_map_at_20_diff1
value: 26.114900000000002
- type: nauc_map_at_100_max
value: 29.5016
- type: nauc_map_at_100_std
value: -0.8401000000000001
- type: nauc_map_at_100_diff1
value: 26.247300000000003
- type: nauc_map_at_1000_max
value: 29.5489
- type: nauc_map_at_1000_std
value: -0.762
- type: nauc_map_at_1000_diff1
value: 26.3015
- type: nauc_recall_at_1_max
value: 17.7301
- type: nauc_recall_at_1_std
value: -3.6554999999999995
- type: nauc_recall_at_1_diff1
value: 31.9805
- type: nauc_recall_at_3_max
value: 26.789099999999998
- type: nauc_recall_at_3_std
value: -1.087
- type: nauc_recall_at_3_diff1
value: 22.7132
- type: nauc_recall_at_5_max
value: 27.6821
- type: nauc_recall_at_5_std
value: 1.043
- type: nauc_recall_at_5_diff1
value: 23.6854
- type: nauc_recall_at_10_max
value: 28.6304
- type: nauc_recall_at_10_std
value: 1.8037
- type: nauc_recall_at_10_diff1
value: 21.7246
- type: nauc_recall_at_20_max
value: 27.939199999999996
- type: nauc_recall_at_20_std
value: 0.9745
- type: nauc_recall_at_20_diff1
value: 20.9084
- type: nauc_recall_at_100_max
value: 23.5267
- type: nauc_recall_at_100_std
value: 3.2817
- type: nauc_recall_at_100_diff1
value: 17.907
- type: nauc_recall_at_1000_max
value: 35.5056
- type: nauc_recall_at_1000_std
value: 8.5216
- type: nauc_recall_at_1000_diff1
value: 36.6571
- type: nauc_precision_at_1_max
value: 31.7051
- type: nauc_precision_at_1_std
value: 1.2411999999999999
- type: nauc_precision_at_1_diff1
value: 33.0747
- type: nauc_precision_at_3_max
value: 38.2081
- type: nauc_precision_at_3_std
value: 1.3497000000000001
- type: nauc_precision_at_3_diff1
value: 22.3155
- type: nauc_precision_at_5_max
value: 38.367200000000004
- type: nauc_precision_at_5_std
value: 2.781
- type: nauc_precision_at_5_diff1
value: 21.5532
- type: nauc_precision_at_10_max
value: 37.7538
- type: nauc_precision_at_10_std
value: 4.7659
- type: nauc_precision_at_10_diff1
value: 19.6003
- type: nauc_precision_at_20_max
value: 35.1427
- type: nauc_precision_at_20_std
value: 5.5358
- type: nauc_precision_at_20_diff1
value: 17.808
- type: nauc_precision_at_100_max
value: 29.7634
- type: nauc_precision_at_100_std
value: 7.9015
- type: nauc_precision_at_100_diff1
value: 14.9111
- type: nauc_precision_at_1000_max
value: 21.906100000000002
- type: nauc_precision_at_1000_std
value: 8.9498
- type: nauc_precision_at_1000_diff1
value: 12.1544
- type: nauc_mrr_at_1_max
value: 31.7051
- type: nauc_mrr_at_1_std
value: 1.2411999999999999
- type: nauc_mrr_at_1_diff1
value: 33.0747
- type: nauc_mrr_at_3_max
value: 31.278200000000002
- type: nauc_mrr_at_3_std
value: 1.3494000000000002
- type: nauc_mrr_at_3_diff1
value: 29.066599999999998
- type: nauc_mrr_at_5_max
value: 31.5683
- type: nauc_mrr_at_5_std
value: 1.9106
- type: nauc_mrr_at_5_diff1
value: 29.5798
- type: nauc_mrr_at_10_max
value: 31.744600000000002
- type: nauc_mrr_at_10_std
value: 2.4455999999999998
- type: nauc_mrr_at_10_diff1
value: 29.1437
- type: nauc_mrr_at_20_max
value: 31.5781
- type: nauc_mrr_at_20_std
value: 2.2138
- type: nauc_mrr_at_20_diff1
value: 29.279899999999998
- type: nauc_mrr_at_100_max
value: 31.435000000000002
- type: nauc_mrr_at_100_std
value: 2.2043
- type: nauc_mrr_at_100_diff1
value: 29.216199999999997
- type: nauc_mrr_at_1000_max
value: 31.465799999999998
- type: nauc_mrr_at_1000_std
value: 2.2215
- type: nauc_mrr_at_1000_diff1
value: 29.2512
- type: main_score
value: 23.518
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-ara)
type: jinaai/xpqa
config: eng-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 1.2
- type: ndcg_at_3
value: 1.1860000000000002
- type: ndcg_at_5
value: 1.3050000000000002
- type: ndcg_at_10
value: 1.6969999999999998
- type: ndcg_at_20
value: 2.044
- type: ndcg_at_100
value: 3.5069999999999997
- type: ndcg_at_1000
value: 11.62
- type: map_at_1
value: 0.656
- type: map_at_3
value: 0.903
- type: map_at_5
value: 1.051
- type: map_at_10
value: 1.189
- type: map_at_20
value: 1.2850000000000001
- type: map_at_100
value: 1.452
- type: map_at_1000
value: 1.6729999999999998
- type: recall_at_1
value: 0.656
- type: recall_at_3
value: 1.0290000000000001
- type: recall_at_5
value: 1.46
- type: recall_at_10
value: 2.478
- type: recall_at_20
value: 3.6639999999999997
- type: recall_at_100
value: 10.453
- type: recall_at_1000
value: 68.58
- type: precision_at_1
value: 1.2
- type: precision_at_3
value: 0.844
- type: precision_at_5
value: 0.6930000000000001
- type: precision_at_10
value: 0.573
- type: precision_at_20
value: 0.393
- type: precision_at_100
value: 0.22399999999999998
- type: precision_at_1000
value: 0.147
- type: mrr_at_1
value: 1.2
- type: mrr_at_3
value: 1.5778
- type: mrr_at_5
value: 1.6978
- type: mrr_at_10
value: 1.9314999999999998
- type: mrr_at_20
value: 2.0536
- type: mrr_at_100
value: 2.2948
- type: mrr_at_1000
value: 2.4878
- type: nauc_ndcg_at_1_max
value: 74.081
- type: nauc_ndcg_at_1_std
value: 5.8313
- type: nauc_ndcg_at_1_diff1
value: 62.427299999999995
- type: nauc_ndcg_at_3_max
value: 65.3629
- type: nauc_ndcg_at_3_std
value: 6.7885
- type: nauc_ndcg_at_3_diff1
value: 54.3825
- type: nauc_ndcg_at_5_max
value: 63.497099999999996
- type: nauc_ndcg_at_5_std
value: 7.2825
- type: nauc_ndcg_at_5_diff1
value: 49.7187
- type: nauc_ndcg_at_10_max
value: 52.3784
- type: nauc_ndcg_at_10_std
value: 3.5996
- type: nauc_ndcg_at_10_diff1
value: 38.3057
- type: nauc_ndcg_at_20_max
value: 47.599799999999995
- type: nauc_ndcg_at_20_std
value: 2.8116
- type: nauc_ndcg_at_20_diff1
value: 35.433
- type: nauc_ndcg_at_100_max
value: 33.6852
- type: nauc_ndcg_at_100_std
value: 4.1317
- type: nauc_ndcg_at_100_diff1
value: 21.5679
- type: nauc_ndcg_at_1000_max
value: 24.516
- type: nauc_ndcg_at_1000_std
value: 5.9024
- type: nauc_ndcg_at_1000_diff1
value: 15.1338
- type: nauc_map_at_1_max
value: 85.331
- type: nauc_map_at_1_std
value: 18.3235
- type: nauc_map_at_1_diff1
value: 80.762
- type: nauc_map_at_3_max
value: 75.1557
- type: nauc_map_at_3_std
value: 11.3855
- type: nauc_map_at_3_diff1
value: 69.277
- type: nauc_map_at_5_max
value: 70.8756
- type: nauc_map_at_5_std
value: 8.223700000000001
- type: nauc_map_at_5_diff1
value: 61.6509
- type: nauc_map_at_10_max
value: 64.0045
- type: nauc_map_at_10_std
value: 6.1125
- type: nauc_map_at_10_diff1
value: 54.5543
- type: nauc_map_at_20_max
value: 61.04619999999999
- type: nauc_map_at_20_std
value: 5.5213
- type: nauc_map_at_20_diff1
value: 52.05309999999999
- type: nauc_map_at_100_max
value: 55.69
- type: nauc_map_at_100_std
value: 5.2997000000000005
- type: nauc_map_at_100_diff1
value: 46.5183
- type: nauc_map_at_1000_max
value: 53.2733
- type: nauc_map_at_1000_std
value: 5.3787
- type: nauc_map_at_1000_diff1
value: 44.2553
- type: nauc_recall_at_1_max
value: 85.331
- type: nauc_recall_at_1_std
value: 18.3235
- type: nauc_recall_at_1_diff1
value: 80.762
- type: nauc_recall_at_3_max
value: 68.1551
- type: nauc_recall_at_3_std
value: 12.2398
- type: nauc_recall_at_3_diff1
value: 60.7436
- type: nauc_recall_at_5_max
value: 62.2638
- type: nauc_recall_at_5_std
value: 8.578
- type: nauc_recall_at_5_diff1
value: 42.3461
- type: nauc_recall_at_10_max
value: 42.8151
- type: nauc_recall_at_10_std
value: 1.034
- type: nauc_recall_at_10_diff1
value: 23.8109
- type: nauc_recall_at_20_max
value: 36.9734
- type: nauc_recall_at_20_std
value: 0.9624
- type: nauc_recall_at_20_diff1
value: 22.0584
- type: nauc_recall_at_100_max
value: 21.0573
- type: nauc_recall_at_100_std
value: 3.7708
- type: nauc_recall_at_100_diff1
value: 7.7184
- type: nauc_recall_at_1000_max
value: 8.8652
- type: nauc_recall_at_1000_std
value: 5.3474
- type: nauc_recall_at_1000_diff1
value: 7.3409
- type: nauc_precision_at_1_max
value: 74.081
- type: nauc_precision_at_1_std
value: 5.8313
- type: nauc_precision_at_1_diff1
value: 62.427299999999995
- type: nauc_precision_at_3_max
value: 51.821
- type: nauc_precision_at_3_std
value: -1.3345
- type: nauc_precision_at_3_diff1
value: 37.6809
- type: nauc_precision_at_5_max
value: 45.9495
- type: nauc_precision_at_5_std
value: -1.6027
- type: nauc_precision_at_5_diff1
value: 30.794
- type: nauc_precision_at_10_max
value: 34.2635
- type: nauc_precision_at_10_std
value: -4.0278
- type: nauc_precision_at_10_diff1
value: 19.223000000000003
- type: nauc_precision_at_20_max
value: 30.588500000000003
- type: nauc_precision_at_20_std
value: -5.0488
- type: nauc_precision_at_20_diff1
value: 20.971999999999998
- type: nauc_precision_at_100_max
value: 18.7883
- type: nauc_precision_at_100_std
value: 3.4913
- type: nauc_precision_at_100_diff1
value: 9.4293
- type: nauc_precision_at_1000_max
value: 5.8584
- type: nauc_precision_at_1000_std
value: 6.8013
- type: nauc_precision_at_1000_diff1
value: -2.4122
- type: nauc_mrr_at_1_max
value: 74.081
- type: nauc_mrr_at_1_std
value: 5.8313
- type: nauc_mrr_at_1_diff1
value: 62.427299999999995
- type: nauc_mrr_at_3_max
value: 58.44819999999999
- type: nauc_mrr_at_3_std
value: 3.6037
- type: nauc_mrr_at_3_diff1
value: 42.664699999999996
- type: nauc_mrr_at_5_max
value: 56.606100000000005
- type: nauc_mrr_at_5_std
value: 4.3769
- type: nauc_mrr_at_5_diff1
value: 39.446799999999996
- type: nauc_mrr_at_10_max
value: 52.283699999999996
- type: nauc_mrr_at_10_std
value: 3.3348000000000004
- type: nauc_mrr_at_10_diff1
value: 35.186099999999996
- type: nauc_mrr_at_20_max
value: 50.6598
- type: nauc_mrr_at_20_std
value: 3.1269
- type: nauc_mrr_at_20_diff1
value: 34.930099999999996
- type: nauc_mrr_at_100_max
value: 46.7037
- type: nauc_mrr_at_100_std
value: 3.2654
- type: nauc_mrr_at_100_diff1
value: 31.1309
- type: nauc_mrr_at_1000_max
value: 46.1128
- type: nauc_mrr_at_1000_std
value: 3.3853
- type: nauc_mrr_at_1000_diff1
value: 30.3609
- type: main_score
value: 1.6969999999999998
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-eng)
type: jinaai/xpqa
config: ara-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 1.617
- type: ndcg_at_3
value: 1.8159999999999998
- type: ndcg_at_5
value: 1.9869999999999999
- type: ndcg_at_10
value: 2.394
- type: ndcg_at_20
value: 2.724
- type: ndcg_at_100
value: 4.2909999999999995
- type: ndcg_at_1000
value: 12.857
- type: map_at_1
value: 0.903
- type: map_at_3
value: 1.421
- type: map_at_5
value: 1.5610000000000002
- type: map_at_10
value: 1.7420000000000002
- type: map_at_20
value: 1.828
- type: map_at_100
value: 2.016
- type: map_at_1000
value: 2.259
- type: recall_at_1
value: 0.903
- type: recall_at_3
value: 1.923
- type: recall_at_5
value: 2.4330000000000003
- type: recall_at_10
value: 3.4819999999999998
- type: recall_at_20
value: 4.5440000000000005
- type: recall_at_100
value: 11.846
- type: recall_at_1000
value: 74.371
- type: precision_at_1
value: 1.617
- type: precision_at_3
value: 1.168
- type: precision_at_5
value: 0.889
- type: precision_at_10
value: 0.647
- type: precision_at_20
value: 0.438
- type: precision_at_100
value: 0.244
- type: precision_at_1000
value: 0.146
- type: mrr_at_1
value: 1.6173
- type: mrr_at_3
value: 2.2686
- type: mrr_at_5
value: 2.3899
- type: mrr_at_10
value: 2.5806
- type: mrr_at_20
value: 2.7121
- type: mrr_at_100
value: 2.9324
- type: mrr_at_1000
value: 3.1441
- type: nauc_ndcg_at_1_max
value: 41.4733
- type: nauc_ndcg_at_1_std
value: 34.5204
- type: nauc_ndcg_at_1_diff1
value: 38.8662
- type: nauc_ndcg_at_3_max
value: 41.3135
- type: nauc_ndcg_at_3_std
value: 40.0385
- type: nauc_ndcg_at_3_diff1
value: 36.750899999999994
- type: nauc_ndcg_at_5_max
value: 42.9281
- type: nauc_ndcg_at_5_std
value: 39.9347
- type: nauc_ndcg_at_5_diff1
value: 35.3783
- type: nauc_ndcg_at_10_max
value: 42.743900000000004
- type: nauc_ndcg_at_10_std
value: 41.6663
- type: nauc_ndcg_at_10_diff1
value: 31.0463
- type: nauc_ndcg_at_20_max
value: 43.5237
- type: nauc_ndcg_at_20_std
value: 39.6809
- type: nauc_ndcg_at_20_diff1
value: 32.651
- type: nauc_ndcg_at_100_max
value: 33.3655
- type: nauc_ndcg_at_100_std
value: 32.0311
- type: nauc_ndcg_at_100_diff1
value: 28.723399999999998
- type: nauc_ndcg_at_1000_max
value: 31.1311
- type: nauc_ndcg_at_1000_std
value: 28.838900000000002
- type: nauc_ndcg_at_1000_diff1
value: 26.2104
- type: nauc_map_at_1_max
value: 34.202
- type: nauc_map_at_1_std
value: 33.9772
- type: nauc_map_at_1_diff1
value: 44.6104
- type: nauc_map_at_3_max
value: 39.6785
- type: nauc_map_at_3_std
value: 39.4152
- type: nauc_map_at_3_diff1
value: 37.6022
- type: nauc_map_at_5_max
value: 41.2645
- type: nauc_map_at_5_std
value: 38.6109
- type: nauc_map_at_5_diff1
value: 37.3159
- type: nauc_map_at_10_max
value: 41.9172
- type: nauc_map_at_10_std
value: 40.3848
- type: nauc_map_at_10_diff1
value: 35.2489
- type: nauc_map_at_20_max
value: 42.0995
- type: nauc_map_at_20_std
value: 39.6004
- type: nauc_map_at_20_diff1
value: 35.4418
- type: nauc_map_at_100_max
value: 39.7447
- type: nauc_map_at_100_std
value: 37.819599999999994
- type: nauc_map_at_100_diff1
value: 34.1062
- type: nauc_map_at_1000_max
value: 39.2917
- type: nauc_map_at_1000_std
value: 37.1777
- type: nauc_map_at_1000_diff1
value: 33.6102
- type: nauc_recall_at_1_max
value: 34.202
- type: nauc_recall_at_1_std
value: 33.9772
- type: nauc_recall_at_1_diff1
value: 44.6104
- type: nauc_recall_at_3_max
value: 39.048
- type: nauc_recall_at_3_std
value: 39.7222
- type: nauc_recall_at_3_diff1
value: 33.0168
- type: nauc_recall_at_5_max
value: 42.954100000000004
- type: nauc_recall_at_5_std
value: 39.4149
- type: nauc_recall_at_5_diff1
value: 31.6088
- type: nauc_recall_at_10_max
value: 41.2203
- type: nauc_recall_at_10_std
value: 41.7063
- type: nauc_recall_at_10_diff1
value: 24.0288
- type: nauc_recall_at_20_max
value: 44.0757
- type: nauc_recall_at_20_std
value: 38.6803
- type: nauc_recall_at_20_diff1
value: 29.157899999999998
- type: nauc_recall_at_100_max
value: 24.6526
- type: nauc_recall_at_100_std
value: 24.0066
- type: nauc_recall_at_100_diff1
value: 23.8347
- type: nauc_recall_at_1000_max
value: 22.596
- type: nauc_recall_at_1000_std
value: 21.290799999999997
- type: nauc_recall_at_1000_diff1
value: 21.012700000000002
- type: nauc_precision_at_1_max
value: 41.4733
- type: nauc_precision_at_1_std
value: 34.5204
- type: nauc_precision_at_1_diff1
value: 38.8662
- type: nauc_precision_at_3_max
value: 48.1229
- type: nauc_precision_at_3_std
value: 47.712500000000006
- type: nauc_precision_at_3_diff1
value: 35.7151
- type: nauc_precision_at_5_max
value: 50.8463
- type: nauc_precision_at_5_std
value: 46.9867
- type: nauc_precision_at_5_diff1
value: 33.0426
- type: nauc_precision_at_10_max
value: 50.7306
- type: nauc_precision_at_10_std
value: 49.5174
- type: nauc_precision_at_10_diff1
value: 28.2889
- type: nauc_precision_at_20_max
value: 49.6035
- type: nauc_precision_at_20_std
value: 42.9794
- type: nauc_precision_at_20_diff1
value: 32.3811
- type: nauc_precision_at_100_max
value: 30.7262
- type: nauc_precision_at_100_std
value: 29.2314
- type: nauc_precision_at_100_diff1
value: 25.7678
- type: nauc_precision_at_1000_max
value: 13.3632
- type: nauc_precision_at_1000_std
value: 11.4093
- type: nauc_precision_at_1000_diff1
value: 11.015
- type: nauc_mrr_at_1_max
value: 41.4733
- type: nauc_mrr_at_1_std
value: 34.5204
- type: nauc_mrr_at_1_diff1
value: 38.8662
- type: nauc_mrr_at_3_max
value: 43.217299999999994
- type: nauc_mrr_at_3_std
value: 39.5736
- type: nauc_mrr_at_3_diff1
value: 38.129999999999995
- type: nauc_mrr_at_5_max
value: 44.241
- type: nauc_mrr_at_5_std
value: 40.646100000000004
- type: nauc_mrr_at_5_diff1
value: 36.2331
- type: nauc_mrr_at_10_max
value: 43.6115
- type: nauc_mrr_at_10_std
value: 40.7157
- type: nauc_mrr_at_10_diff1
value: 33.1217
- type: nauc_mrr_at_20_max
value: 43.3382
- type: nauc_mrr_at_20_std
value: 39.4582
- type: nauc_mrr_at_20_diff1
value: 33.6253
- type: nauc_mrr_at_100_max
value: 40.780100000000004
- type: nauc_mrr_at_100_std
value: 37.9242
- type: nauc_mrr_at_100_diff1
value: 32.8418
- type: nauc_mrr_at_1000_max
value: 40.5963
- type: nauc_mrr_at_1000_std
value: 37.5467
- type: nauc_mrr_at_1000_diff1
value: 32.542
- type: main_score
value: 2.394
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 69.84925402371587
- type: cosine_spearman
value: 67.12261377163864
- type: euclidean_pearson
value: 68.77931734192
- type: euclidean_spearman
value: 67.10454107068325
- type: main_score
value: 67.12261377163864
- type: manhattan_pearson
value: 69.39988076793398
- type: manhattan_spearman
value: 67.68708446481159
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 72.71925116055804
- type: cosine_spearman
value: 68.9386835022992
- type: euclidean_pearson
value: 71.00708266525079
- type: euclidean_spearman
value: 69.07087906196487
- type: main_score
value: 68.9386835022992
- type: manhattan_pearson
value: 70.95266060047263
- type: manhattan_spearman
value: 69.11051988196195
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 71.67274455692545
- type: cosine_spearman
value: 68.71669873972587
- type: euclidean_pearson
value: 69.79037485042406
- type: euclidean_spearman
value: 68.80550150752252
- type: main_score
value: 68.71669873972587
- type: manhattan_pearson
value: 69.7571283034187
- type: manhattan_spearman
value: 68.58306466019968
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 54.172888286882504
- type: cosine_spearman
value: 56.04247097489131
- type: euclidean_pearson
value: 57.88587934777827
- type: euclidean_spearman
value: 57.6139294630564
- type: main_score
value: 56.04247097489131
- type: manhattan_pearson
value: 57.616116618991185
- type: manhattan_spearman
value: 57.23150380799801
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 59.58820914531488
- type: cosine_spearman
value: 58.80575077741524
- type: euclidean_pearson
value: 61.1884427988923
- type: euclidean_spearman
value: 60.661625936116124
- type: main_score
value: 58.80575077741524
- type: manhattan_pearson
value: 60.800157410891885
- type: manhattan_spearman
value: 60.29447727072491
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.45220638967554
- type: cosine_spearman
value: 73.74453589715445
- type: euclidean_pearson
value: 73.8887071337604
- type: euclidean_spearman
value: 73.51752094057372
- type: main_score
value: 73.74453589715445
- type: manhattan_pearson
value: 73.45961523235827
- type: manhattan_spearman
value: 73.07675481848841
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 66.84132105540075
- type: cosine_spearman
value: 68.24735989887876
- type: euclidean_pearson
value: 68.2712231484699
- type: euclidean_spearman
value: 68.02365271737838
- type: main_score
value: 68.24735989887876
- type: manhattan_pearson
value: 67.87379902773417
- type: manhattan_spearman
value: 67.65342499070456
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 79.2987412566616
- type: cosine_spearman
value: 79.93275889323859
- type: euclidean_pearson
value: 77.90301430319637
- type: euclidean_spearman
value: 79.12169562085792
- type: main_score
value: 79.93275889323859
- type: manhattan_pearson
value: 77.93298637610417
- type: manhattan_spearman
value: 79.38516109229111
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 46.955019830396445
- type: cosine_spearman
value: 52.44226852669887
- type: euclidean_pearson
value: 42.80891863181744
- type: euclidean_spearman
value: 53.175461247693704
- type: main_score
value: 52.44226852669887
- type: manhattan_pearson
value: 42.97005510727849
- type: manhattan_spearman
value: 53.158087426369825
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 66.99025999216197
- type: cosine_spearman
value: 67.56341643518167
- type: euclidean_pearson
value: 69.73441598964332
- type: euclidean_spearman
value: 68.72541136876826
- type: main_score
value: 67.56341643518167
- type: manhattan_pearson
value: 69.43492004000674
- type: manhattan_spearman
value: 68.39614969063062
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.13248188083236
- type: cosine_spearman
value: 28.78575545661001
- type: dot_pearson
value: 30.934754821379464
- type: dot_spearman
value: 29.730792596057093
- type: main_score
value: 28.78575545661001
- type: pearson
value: 30.13248188083236
- type: spearman
value: 28.78575545661001
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.66986244175229
name: Pearson Cosine
- type: spearman_cosine
value: 0.675651628513557
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6943200977280434
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6839707658313092
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6973190148612566
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6872926092972673
name: Spearman Euclidean
- type: pearson_dot
value: 0.5534197296097646
name: Pearson Dot
- type: spearman_dot
value: 0.5421965591416092
name: Spearman Dot
- type: pearson_max
value: 0.6973190148612566
name: Pearson Max
- type: spearman_max
value: 0.6872926092972673
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.6628171358537143
name: Pearson Cosine
- type: spearman_cosine
value: 0.670314701212355
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6916567677127377
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6815748132707206
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6948756461188812
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.685329042213794
name: Spearman Euclidean
- type: pearson_dot
value: 0.5229142840207227
name: Pearson Dot
- type: spearman_dot
value: 0.5113740757424073
name: Spearman Dot
- type: pearson_max
value: 0.6948756461188812
name: Pearson Max
- type: spearman_max
value: 0.685329042213794
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.6368313837029833
name: Pearson Cosine
- type: spearman_cosine
value: 0.6512526280069127
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6832129716443456
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.674638334774044
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6843664039671002
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6760040651639672
name: Spearman Euclidean
- type: pearson_dot
value: 0.4266095536126992
name: Pearson Dot
- type: spearman_dot
value: 0.4179376458107888
name: Spearman Dot
- type: pearson_max
value: 0.6843664039671002
name: Pearson Max
- type: spearman_max
value: 0.6760040651639672
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.6147896744901056
name: Pearson Cosine
- type: spearman_cosine
value: 0.6354730852658397
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6730782159165468
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6652649799789521
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.676407799774529
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6691409653459247
name: Spearman Euclidean
- type: pearson_dot
value: 0.35130869784942953
name: Pearson Dot
- type: spearman_dot
value: 0.3445374275232203
name: Spearman Dot
- type: pearson_max
value: 0.676407799774529
name: Pearson Max
- type: spearman_max
value: 0.6691409653459247
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.5789158725954748
name: Pearson Cosine
- type: spearman_cosine
value: 0.6081197115891086
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6578631744829946
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6518503436513217
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6629734628760299
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6570510967281272
name: Spearman Euclidean
- type: pearson_dot
value: 0.24034366392620327
name: Pearson Dot
- type: spearman_dot
value: 0.2331392769925126
name: Spearman Dot
- type: pearson_max
value: 0.6629734628760299
name: Pearson Max
- type: spearman_max
value: 0.6570510967281272
name: Spearman Max
---
# SentenceTransformer based on tomaarsen/mpnet-base-all-nli-triplet
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [tomaarsen/mpnet-base-all-nli-triplet](https://huggingface.co/tomaarsen/mpnet-base-all-nli-triplet) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [tomaarsen/mpnet-base-all-nli-triplet](https://huggingface.co/tomaarsen/mpnet-base-all-nli-triplet) <!-- at revision e88732e5620f3592bf6566604be9a6a5cad814ec -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
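Because the model was trained with a Matryoshka objective over 768/512/256/128/64 dimensions (see Training Details below), it can also be loaded with a truncated embedding size. The following is a minimal sketch, assuming the `truncate_dim` argument available in recent Sentence Transformers releases; the 256-dimension choice is illustrative:
```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 dimensions of each embedding (any of 768/512/256/128/64 works).
model_256 = SentenceTransformer(
    "Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base",
    truncate_dim=256,
)
embeddings = model_256.encode(["شخص في الهواء الطلق، على حصان."])
print(embeddings.shape)
# (1, 256)
```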
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6699 |
| **spearman_cosine** | **0.6757** |
| pearson_manhattan | 0.6943 |
| spearman_manhattan | 0.684 |
| pearson_euclidean | 0.6973 |
| spearman_euclidean | 0.6873 |
| pearson_dot | 0.5534 |
| spearman_dot | 0.5422 |
| pearson_max | 0.6973 |
| spearman_max | 0.6873 |
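These figures come from the evaluator linked above; the sketch below shows roughly how to run it yourself, with placeholder sentence pairs and gold scores standing in for the actual Arabic STS test split:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")

# Placeholder pairs with gold similarity scores in [0, 1]; replace with your STS data.
sentences1 = ["شخص يعزف على الجيتار", "امرأة تقرأ كتاباً في الحديقة"]
sentences2 = ["رجل يعزف على آلة موسيقية", "رجل يطبخ في المطبخ"]
gold_scores = [0.9, 0.1]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1,
    sentences2,
    gold_scores,
    name="sts-test-768",
)
print(evaluator(model))  # Pearson/Spearman correlations for cosine, Euclidean, Manhattan, and dot similarities
```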
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6628 |
| **spearman_cosine** | **0.6703** |
| pearson_manhattan | 0.6917 |
| spearman_manhattan | 0.6816 |
| pearson_euclidean | 0.6949 |
| spearman_euclidean | 0.6853 |
| pearson_dot | 0.5229 |
| spearman_dot | 0.5114 |
| pearson_max | 0.6949 |
| spearman_max | 0.6853 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6368 |
| **spearman_cosine** | **0.6513** |
| pearson_manhattan | 0.6832 |
| spearman_manhattan | 0.6746 |
| pearson_euclidean | 0.6844 |
| spearman_euclidean | 0.676 |
| pearson_dot | 0.4266 |
| spearman_dot | 0.4179 |
| pearson_max | 0.6844 |
| spearman_max | 0.676 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6148 |
| **spearman_cosine** | **0.6355** |
| pearson_manhattan | 0.6731 |
| spearman_manhattan | 0.6653 |
| pearson_euclidean | 0.6764 |
| spearman_euclidean | 0.6691 |
| pearson_dot | 0.3513 |
| spearman_dot | 0.3445 |
| pearson_max | 0.6764 |
| spearman_max | 0.6691 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.5789 |
| **spearman_cosine** | **0.6081** |
| pearson_manhattan | 0.6579 |
| spearman_manhattan | 0.6519 |
| pearson_euclidean | 0.663 |
| spearman_euclidean | 0.6571 |
| pearson_dot | 0.2403 |
| spearman_dot | 0.2331 |
| pearson_max | 0.663 |
| spearman_max | 0.6571 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 23.93 tokens</li><li>max: 155 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 29.62 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 33.95 tokens</li><li>max: 149 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
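The parameters above correspond to a loss setup along the lines of the following sketch (trainer wiring omitted; names follow the Sentence Transformers v3 API):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("tomaarsen/mpnet-base-all-nli-triplet")

# Inner triplet loss, wrapped so that the leading 768/512/256/128/64 dimensions
# of every embedding are each trained to work as standalone representations.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```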
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 49.5 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.66 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 25.33 tokens</li><li>max: 82 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
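For reference, the non-default values listed above map onto training arguments roughly as in this sketch (the output directory is hypothetical; everything not shown keeps its default):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs/arabic-mpnet-base-all-nli-triplet",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=5e-5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```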
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:-----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.0229 | 200 | 21.5318 | - | - | - | - | - |
| 0.0459 | 400 | 17.2344 | - | - | - | - | - |
| 0.0688 | 600 | 15.393 | - | - | - | - | - |
| 0.0918 | 800 | 13.7897 | - | - | - | - | - |
| 0.1147 | 1000 | 13.534 | - | - | - | - | - |
| 0.1377 | 1200 | 12.2683 | - | - | - | - | - |
| 0.1606 | 1400 | 10.9271 | - | - | - | - | - |
| 0.1835 | 1600 | 11.071 | - | - | - | - | - |
| 0.2065 | 1800 | 10.0153 | - | - | - | - | - |
| 0.2294 | 2000 | 9.8463 | - | - | - | - | - |
| 0.2524 | 2200 | 10.0194 | - | - | - | - | - |
| 0.2753 | 2400 | 9.8371 | - | - | - | - | - |
| 0.2983 | 2600 | 9.6315 | - | - | - | - | - |
| 0.3212 | 2800 | 8.9858 | - | - | - | - | - |
| 0.3442 | 3000 | 9.1876 | - | - | - | - | - |
| 0.3671 | 3200 | 8.8028 | - | - | - | - | - |
| 0.3900 | 3400 | 8.6075 | - | - | - | - | - |
| 0.4130 | 3600 | 8.4285 | - | - | - | - | - |
| 0.4359 | 3800 | 8.1258 | - | - | - | - | - |
| 0.4589 | 4000 | 8.2508 | - | - | - | - | - |
| 0.4818 | 4200 | 7.8037 | - | - | - | - | - |
| 0.5048 | 4400 | 7.7133 | - | - | - | - | - |
| 0.5277 | 4600 | 7.5006 | - | - | - | - | - |
| 0.5506 | 4800 | 7.7025 | - | - | - | - | - |
| 0.5736 | 5000 | 7.7593 | - | - | - | - | - |
| 0.5965 | 5200 | 7.6305 | - | - | - | - | - |
| 0.6195 | 5400 | 7.7502 | - | - | - | - | - |
| 0.6424 | 5600 | 7.5624 | - | - | - | - | - |
| 0.6654 | 5800 | 7.5287 | - | - | - | - | - |
| 0.6883 | 6000 | 7.4261 | - | - | - | - | - |
| 0.7113 | 6200 | 7.239 | - | - | - | - | - |
| 0.7342 | 6400 | 7.1631 | - | - | - | - | - |
| 0.7571 | 6600 | 7.6865 | - | - | - | - | - |
| 0.7801 | 6800 | 7.6124 | - | - | - | - | - |
| 0.8030 | 7000 | 6.9936 | - | - | - | - | - |
| 0.8260 | 7200 | 6.7331 | - | - | - | - | - |
| 0.8489 | 7400 | 6.4542 | - | - | - | - | - |
| 0.8719 | 7600 | 6.1994 | - | - | - | - | - |
| 0.8948 | 7800 | 5.9798 | - | - | - | - | - |
| 0.9177 | 8000 | 5.7808 | - | - | - | - | - |
| 0.9407 | 8200 | 5.6952 | - | - | - | - | - |
| 0.9636 | 8400 | 5.5082 | - | - | - | - | - |
| 0.9866 | 8600 | 5.4421 | - | - | - | - | - |
| 1.0095 | 8800 | 3.0309 | - | - | - | - | - |
| 1.0026 | 9000 | 1.1835 | - | - | - | - | - |
| 1.0256 | 9200 | 8.1196 | - | - | - | - | - |
| 1.0485 | 9400 | 8.0326 | - | - | - | - | - |
| 1.0715 | 9600 | 8.5028 | - | - | - | - | - |
| 1.0944 | 9800 | 7.6923 | - | - | - | - | - |
| 1.1174 | 10000 | 8.029 | - | - | - | - | - |
| 1.1403 | 10200 | 7.5052 | - | - | - | - | - |
| 1.1632 | 10400 | 7.1177 | - | - | - | - | - |
| 1.1862 | 10600 | 6.9594 | - | - | - | - | - |
| 1.2091 | 10800 | 6.6662 | - | - | - | - | - |
| 1.2321 | 11000 | 6.6903 | - | - | - | - | - |
| 1.2550 | 11200 | 6.9523 | - | - | - | - | - |
| 1.2780 | 11400 | 6.676 | - | - | - | - | - |
| 1.3009 | 11600 | 6.7141 | - | - | - | - | - |
| 1.3238 | 11800 | 6.568 | - | - | - | - | - |
| 1.3468 | 12000 | 6.8938 | - | - | - | - | - |
| 1.3697 | 12200 | 6.3745 | - | - | - | - | - |
| 1.3927 | 12400 | 6.2513 | - | - | - | - | - |
| 1.4156 | 12600 | 6.2589 | - | - | - | - | - |
| 1.4386 | 12800 | 6.1388 | - | - | - | - | - |
| 1.4615 | 13000 | 6.1835 | - | - | - | - | - |
| 1.4845 | 13200 | 5.9004 | - | - | - | - | - |
| 1.5074 | 13400 | 5.7891 | - | - | - | - | - |
| 1.5303 | 13600 | 5.6184 | - | - | - | - | - |
| 1.5533 | 13800 | 5.9762 | - | - | - | - | - |
| 1.5762 | 14000 | 5.9737 | - | - | - | - | - |
| 1.5992 | 14200 | 5.8563 | - | - | - | - | - |
| 1.6221 | 14400 | 5.8904 | - | - | - | - | - |
| 1.6451 | 14600 | 5.8484 | - | - | - | - | - |
| 1.6680 | 14800 | 5.8906 | - | - | - | - | - |
| 1.6909 | 15000 | 5.7613 | - | - | - | - | - |
| 1.7139 | 15200 | 5.5744 | - | - | - | - | - |
| 1.7368 | 15400 | 5.6569 | - | - | - | - | - |
| 1.7598 | 15600 | 5.7439 | - | - | - | - | - |
| 1.7827 | 15800 | 5.5593 | - | - | - | - | - |
| 1.8057 | 16000 | 5.2935 | - | - | - | - | - |
| 1.8286 | 16200 | 5.088 | - | - | - | - | - |
| 1.8516 | 16400 | 5.0167 | - | - | - | - | - |
| 1.8745 | 16600 | 4.84 | - | - | - | - | - |
| 1.8974 | 16800 | 4.6731 | - | - | - | - | - |
| 1.9204 | 17000 | 4.6404 | - | - | - | - | - |
| 1.9433 | 17200 | 4.6413 | - | - | - | - | - |
| 1.9663 | 17400 | 4.4495 | - | - | - | - | - |
| 1.9892 | 17600 | 4.4262 | - | - | - | - | - |
| 2.0122 | 17800 | 2.01 | - | - | - | - | - |
| 2.0053 | 18000 | 1.8418 | - | - | - | - | - |
| 2.0282 | 18200 | 6.2714 | - | - | - | - | - |
| 2.0512 | 18400 | 6.1742 | - | - | - | - | - |
| 2.0741 | 18600 | 6.5996 | - | - | - | - | - |
| 2.0971 | 18800 | 6.0907 | - | - | - | - | - |
| 2.1200 | 19000 | 6.2418 | - | - | - | - | - |
| 2.1429 | 19200 | 5.7817 | - | - | - | - | - |
| 2.1659 | 19400 | 5.7073 | - | - | - | - | - |
| 2.1888 | 19600 | 5.2645 | - | - | - | - | - |
| 2.2118 | 19800 | 5.3451 | - | - | - | - | - |
| 2.2347 | 20000 | 5.2453 | - | - | - | - | - |
| 2.2577 | 20200 | 5.6161 | - | - | - | - | - |
| 2.2806 | 20400 | 5.2289 | - | - | - | - | - |
| 2.3035 | 20600 | 5.3888 | - | - | - | - | - |
| 2.3265 | 20800 | 5.2483 | - | - | - | - | - |
| 2.3494 | 21000 | 5.5791 | - | - | - | - | - |
| 2.3724 | 21200 | 5.1643 | - | - | - | - | - |
| 2.3953 | 21400 | 5.1231 | - | - | - | - | - |
| 2.4183 | 21600 | 5.1055 | - | - | - | - | - |
| 2.4412 | 21800 | 5.1778 | - | - | - | - | - |
| 2.4642 | 22000 | 5.0466 | - | - | - | - | - |
| 2.4871 | 22200 | 4.8321 | - | - | - | - | - |
| 2.5100 | 22400 | 4.7056 | - | - | - | - | - |
| 2.5330 | 22600 | 4.6858 | - | - | - | - | - |
| 2.5559 | 22800 | 4.9189 | - | - | - | - | - |
| 2.5789 | 23000 | 4.912 | - | - | - | - | - |
| 2.6018 | 23200 | 4.8289 | - | - | - | - | - |
| 2.6248 | 23400 | 4.8959 | - | - | - | - | - |
| 2.6477 | 23600 | 4.9441 | - | - | - | - | - |
| 2.6706 | 23800 | 4.9334 | - | - | - | - | - |
| 2.6936 | 24000 | 4.8328 | - | - | - | - | - |
| 2.7165 | 24200 | 4.601 | - | - | - | - | - |
| 2.7395 | 24400 | 4.834 | - | - | - | - | - |
| 2.7624 | 24600 | 5.152 | - | - | - | - | - |
| 2.7854 | 24800 | 4.9232 | - | - | - | - | - |
| 2.8083 | 25000 | 4.6556 | - | - | - | - | - |
| 2.8312 | 25200 | 4.6229 | - | - | - | - | - |
| 2.8542 | 25400 | 4.5768 | - | - | - | - | - |
| 2.8771 | 25600 | 4.3619 | - | - | - | - | - |
| 2.9001 | 25800 | 4.3608 | - | - | - | - | - |
| 2.9230 | 26000 | 4.2834 | - | - | - | - | - |
| 2.9403 | 26151 | - | 0.6355 | 0.6513 | 0.6703 | 0.6081 | 0.6757 |
</details>
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
author={Omer Nacar and Anis Koubaa},
year={2024},
eprint={2407.21139},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.21139},
}
```
 | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES"
] | Non_BioNLP |
croissantllm/base_25k | croissantllm | text2text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"code",
"text-generation-inference",
"art",
"text2text-generation",
"fr",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:uonlp/CulturaX",
"dataset:pg19",
"dataset:bigcode/starcoderdata",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,705 | 1,706 | 5 | 0 | ---
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
language:
- fr
- en
license: mit
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---
# CroissantLLM - Base (25k steps)
This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 25k steps (0.39T tokens).
To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.
## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.
## Citation
Our work can be cited as:
```bash
Coming soon
```
## Usage
This model is a base model; it has not been fine-tuned for chat and works best with few-shot prompting strategies.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "croissantllm/base_25k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")
inputs = tokenizer("""I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.
He is heading to the market. -> Il va au marché.
We are running on the beach. ->""", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
# Few-shot prompt for capital cities (the tokenizer's special tokens, including BOS, are kept here)
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```
| [
"TRANSLATION"
] | [
"CRAFT"
] | Non_BioNLP |
mlx-community/multilingual-e5-small-mlx | mlx-community | sentence-similarity | [
"sentence-transformers",
"bert",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"mlx",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,704 | 1,704 | 11 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
- mlx
model-index:
- name: multilingual-e5-small
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 36.9996434842022
- type: f1
value: 67.95453679103099
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.64882226980728
- type: ap
value: 82.11942130026586
- type: f1
value: 69.87963421606715
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8095952023988
- type: ap
value: 24.46869495579561
- type: f1
value: 63.00108480037597
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 64.186295503212
- type: ap
value: 15.496804690197042
- type: f1
value: 52.07153895475031
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.699325
- type: ap
value: 85.27039559917269
- type: f1
value: 88.65556295032513
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.69799999999999
- type: f1
value: 43.73187348654165
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.245999999999995
- type: f1
value: 39.3863530637684
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.394
- type: f1
value: 39.301223469483446
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.864
- type: f1
value: 37.97974261868003
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.682
- type: f1
value: 37.07399369768313
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.504
- type: f1
value: 36.62317273874278
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.061
- type: map_at_10
value: 31.703
- type: map_at_100
value: 32.967
- type: map_at_1000
value: 33.001000000000005
- type: map_at_3
value: 27.466
- type: map_at_5
value: 29.564
- type: mrr_at_1
value: 19.559
- type: mrr_at_10
value: 31.874999999999996
- type: mrr_at_100
value: 33.146
- type: mrr_at_1000
value: 33.18
- type: mrr_at_3
value: 27.667
- type: mrr_at_5
value: 29.74
- type: ndcg_at_1
value: 19.061
- type: ndcg_at_10
value: 39.062999999999995
- type: ndcg_at_100
value: 45.184000000000005
- type: ndcg_at_1000
value: 46.115
- type: ndcg_at_3
value: 30.203000000000003
- type: ndcg_at_5
value: 33.953
- type: precision_at_1
value: 19.061
- type: precision_at_10
value: 6.279999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 12.706999999999999
- type: precision_at_5
value: 9.431000000000001
- type: recall_at_1
value: 19.061
- type: recall_at_10
value: 62.802
- type: recall_at_100
value: 91.323
- type: recall_at_1000
value: 98.72
- type: recall_at_3
value: 38.122
- type: recall_at_5
value: 47.155
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 39.22266660528253
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 30.79980849482483
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.8790068352054
- type: mrr
value: 71.78791276436706
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.36328364043163
- type: cos_sim_spearman
value: 82.26211536195868
- type: euclidean_pearson
value: 80.3183865039173
- type: euclidean_spearman
value: 79.88495276296132
- type: manhattan_pearson
value: 80.14484480692127
- type: manhattan_spearman
value: 80.39279565980743
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.0375782881002
- type: f1
value: 97.86012526096033
- type: precision
value: 97.77139874739039
- type: recall
value: 98.0375782881002
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 93.35241030156286
- type: f1
value: 92.66050333846944
- type: precision
value: 92.3306919069631
- type: recall
value: 93.35241030156286
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 94.0699688257707
- type: f1
value: 93.50236693222492
- type: precision
value: 93.22791825424315
- type: recall
value: 94.0699688257707
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 89.25750394944708
- type: f1
value: 88.79234684921889
- type: precision
value: 88.57293312269616
- type: recall
value: 89.25750394944708
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 79.41558441558442
- type: f1
value: 79.25886487487219
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.747820820329736
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 27.045143830596146
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.252999999999997
- type: map_at_10
value: 31.655916666666666
- type: map_at_100
value: 32.680749999999996
- type: map_at_1000
value: 32.79483333333334
- type: map_at_3
value: 29.43691666666666
- type: map_at_5
value: 30.717416666666665
- type: mrr_at_1
value: 28.602750000000004
- type: mrr_at_10
value: 35.56875
- type: mrr_at_100
value: 36.3595
- type: mrr_at_1000
value: 36.427749999999996
- type: mrr_at_3
value: 33.586166666666664
- type: mrr_at_5
value: 34.73641666666666
- type: ndcg_at_1
value: 28.602750000000004
- type: ndcg_at_10
value: 36.06933333333334
- type: ndcg_at_100
value: 40.70141666666667
- type: ndcg_at_1000
value: 43.24341666666667
- type: ndcg_at_3
value: 32.307916666666664
- type: ndcg_at_5
value: 34.129999999999995
- type: precision_at_1
value: 28.602750000000004
- type: precision_at_10
value: 6.097666666666667
- type: precision_at_100
value: 0.9809166666666668
- type: precision_at_1000
value: 0.13766666666666663
- type: precision_at_3
value: 14.628166666666667
- type: precision_at_5
value: 10.266916666666667
- type: recall_at_1
value: 24.252999999999997
- type: recall_at_10
value: 45.31916666666667
- type: recall_at_100
value: 66.03575000000001
- type: recall_at_1000
value: 83.94708333333334
- type: recall_at_3
value: 34.71941666666666
- type: recall_at_5
value: 39.46358333333333
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.024000000000001
- type: map_at_10
value: 15.644
- type: map_at_100
value: 17.154
- type: map_at_1000
value: 17.345
- type: map_at_3
value: 13.028
- type: map_at_5
value: 14.251
- type: mrr_at_1
value: 19.674
- type: mrr_at_10
value: 29.826999999999998
- type: mrr_at_100
value: 30.935000000000002
- type: mrr_at_1000
value: 30.987
- type: mrr_at_3
value: 26.645000000000003
- type: mrr_at_5
value: 28.29
- type: ndcg_at_1
value: 19.674
- type: ndcg_at_10
value: 22.545
- type: ndcg_at_100
value: 29.207
- type: ndcg_at_1000
value: 32.912
- type: ndcg_at_3
value: 17.952
- type: ndcg_at_5
value: 19.363
- type: precision_at_1
value: 19.674
- type: precision_at_10
value: 7.212000000000001
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.212
- type: precision_at_3
value: 13.507
- type: precision_at_5
value: 10.397
- type: recall_at_1
value: 9.024000000000001
- type: recall_at_10
value: 28.077999999999996
- type: recall_at_100
value: 51.403
- type: recall_at_1000
value: 72.406
- type: recall_at_3
value: 16.768
- type: recall_at_5
value: 20.737
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.012
- type: map_at_10
value: 17.138
- type: map_at_100
value: 24.146
- type: map_at_1000
value: 25.622
- type: map_at_3
value: 12.552
- type: map_at_5
value: 14.435
- type: mrr_at_1
value: 62.25000000000001
- type: mrr_at_10
value: 71.186
- type: mrr_at_100
value: 71.504
- type: mrr_at_1000
value: 71.514
- type: mrr_at_3
value: 69.333
- type: mrr_at_5
value: 70.408
- type: ndcg_at_1
value: 49.75
- type: ndcg_at_10
value: 37.76
- type: ndcg_at_100
value: 42.071
- type: ndcg_at_1000
value: 49.309
- type: ndcg_at_3
value: 41.644
- type: ndcg_at_5
value: 39.812999999999995
- type: precision_at_1
value: 62.25000000000001
- type: precision_at_10
value: 30.15
- type: precision_at_100
value: 9.753
- type: precision_at_1000
value: 1.9189999999999998
- type: precision_at_3
value: 45.667
- type: precision_at_5
value: 39.15
- type: recall_at_1
value: 8.012
- type: recall_at_10
value: 22.599
- type: recall_at_100
value: 48.068
- type: recall_at_1000
value: 71.328
- type: recall_at_3
value: 14.043
- type: recall_at_5
value: 17.124
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 42.455
- type: f1
value: 37.59462649781862
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.092
- type: map_at_10
value: 69.586
- type: map_at_100
value: 69.968
- type: map_at_1000
value: 69.982
- type: map_at_3
value: 67.48100000000001
- type: map_at_5
value: 68.915
- type: mrr_at_1
value: 62.166
- type: mrr_at_10
value: 73.588
- type: mrr_at_100
value: 73.86399999999999
- type: mrr_at_1000
value: 73.868
- type: mrr_at_3
value: 71.6
- type: mrr_at_5
value: 72.99
- type: ndcg_at_1
value: 62.166
- type: ndcg_at_10
value: 75.27199999999999
- type: ndcg_at_100
value: 76.816
- type: ndcg_at_1000
value: 77.09700000000001
- type: ndcg_at_3
value: 71.36
- type: ndcg_at_5
value: 73.785
- type: precision_at_1
value: 62.166
- type: precision_at_10
value: 9.716
- type: precision_at_100
value: 1.065
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 28.278
- type: precision_at_5
value: 18.343999999999998
- type: recall_at_1
value: 58.092
- type: recall_at_10
value: 88.73400000000001
- type: recall_at_100
value: 95.195
- type: recall_at_1000
value: 97.04599999999999
- type: recall_at_3
value: 78.45
- type: recall_at_5
value: 84.316
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.649
- type: map_at_10
value: 26.457000000000004
- type: map_at_100
value: 28.169
- type: map_at_1000
value: 28.352
- type: map_at_3
value: 23.305
- type: map_at_5
value: 25.169000000000004
- type: mrr_at_1
value: 32.407000000000004
- type: mrr_at_10
value: 40.922
- type: mrr_at_100
value: 41.931000000000004
- type: mrr_at_1000
value: 41.983
- type: mrr_at_3
value: 38.786
- type: mrr_at_5
value: 40.205999999999996
- type: ndcg_at_1
value: 32.407000000000004
- type: ndcg_at_10
value: 33.314
- type: ndcg_at_100
value: 40.312
- type: ndcg_at_1000
value: 43.685
- type: ndcg_at_3
value: 30.391000000000002
- type: ndcg_at_5
value: 31.525
- type: precision_at_1
value: 32.407000000000004
- type: precision_at_10
value: 8.966000000000001
- type: precision_at_100
value: 1.6019999999999999
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 20.165
- type: precision_at_5
value: 14.722
- type: recall_at_1
value: 16.649
- type: recall_at_10
value: 39.117000000000004
- type: recall_at_100
value: 65.726
- type: recall_at_1000
value: 85.784
- type: recall_at_3
value: 27.914
- type: recall_at_5
value: 33.289
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.253
- type: map_at_10
value: 56.16799999999999
- type: map_at_100
value: 57.06099999999999
- type: map_at_1000
value: 57.126
- type: map_at_3
value: 52.644999999999996
- type: map_at_5
value: 54.909
- type: mrr_at_1
value: 72.505
- type: mrr_at_10
value: 79.66
- type: mrr_at_100
value: 79.869
- type: mrr_at_1000
value: 79.88
- type: mrr_at_3
value: 78.411
- type: mrr_at_5
value: 79.19800000000001
- type: ndcg_at_1
value: 72.505
- type: ndcg_at_10
value: 65.094
- type: ndcg_at_100
value: 68.219
- type: ndcg_at_1000
value: 69.515
- type: ndcg_at_3
value: 59.99
- type: ndcg_at_5
value: 62.909000000000006
- type: precision_at_1
value: 72.505
- type: precision_at_10
value: 13.749
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 38.357
- type: precision_at_5
value: 25.313000000000002
- type: recall_at_1
value: 36.253
- type: recall_at_10
value: 68.744
- type: recall_at_100
value: 80.925
- type: recall_at_1000
value: 89.534
- type: recall_at_3
value: 57.535000000000004
- type: recall_at_5
value: 63.282000000000004
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 80.82239999999999
- type: ap
value: 75.65895781725314
- type: f1
value: 80.75880969095746
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.624
- type: map_at_10
value: 34.075
- type: map_at_100
value: 35.229
- type: map_at_1000
value: 35.276999999999994
- type: map_at_3
value: 30.245
- type: map_at_5
value: 32.42
- type: mrr_at_1
value: 22.264
- type: mrr_at_10
value: 34.638000000000005
- type: mrr_at_100
value: 35.744
- type: mrr_at_1000
value: 35.787
- type: mrr_at_3
value: 30.891000000000002
- type: mrr_at_5
value: 33.042
- type: ndcg_at_1
value: 22.264
- type: ndcg_at_10
value: 40.991
- type: ndcg_at_100
value: 46.563
- type: ndcg_at_1000
value: 47.743
- type: ndcg_at_3
value: 33.198
- type: ndcg_at_5
value: 37.069
- type: precision_at_1
value: 22.264
- type: precision_at_10
value: 6.5089999999999995
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.216999999999999
- type: precision_at_5
value: 10.487
- type: recall_at_1
value: 21.624
- type: recall_at_10
value: 62.303
- type: recall_at_100
value: 88.124
- type: recall_at_1000
value: 97.08
- type: recall_at_3
value: 41.099999999999994
- type: recall_at_5
value: 50.381
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.06703146374831
- type: f1
value: 90.86867815863172
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.46970977740209
- type: f1
value: 86.36832872036588
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.26951300867245
- type: f1
value: 88.93561193959502
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 84.22799874725963
- type: f1
value: 84.30490069236556
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.02007888131948
- type: f1
value: 85.39376041027991
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.34900542495481
- type: f1
value: 85.39859673336713
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.078431372549
- type: f1
value: 53.45071102002276
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 65.85798816568047
- type: f1
value: 46.53112748993529
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.96864576384256
- type: f1
value: 45.966703022829506
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 61.31537738803633
- type: f1
value: 45.52601712835461
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.29616349946218
- type: f1
value: 47.24166485726613
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.51537070524412
- type: f1
value: 49.463476319014276
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.06792199058508
- type: f1
value: 54.094921857502285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.960322797579025
- type: f1
value: 48.547371223370945
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.425016812373904
- type: f1
value: 50.47069202054312
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.798251513113655
- type: f1
value: 57.05013069086648
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.37794216543376
- type: f1
value: 56.3607992649805
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.56018829858777
- type: f1
value: 43.87319715715134
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.9724277067922
- type: f1
value: 59.36480066245562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.72696704774715
- type: f1
value: 59.143595966615855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.5971755211836
- type: f1
value: 59.169445724946726
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.29589778076665
- type: f1
value: 67.7577001808977
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.31136516476126
- type: f1
value: 64.52032955983242
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.54472091459314
- type: f1
value: 61.47903120066317
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.45595158036314
- type: f1
value: 58.0891846024637
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.47074646940149
- type: f1
value: 62.84830858877575
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.046402151983855
- type: f1
value: 55.269074430533195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.06523201075991
- type: f1
value: 61.35339643021369
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.954942837928726
- type: f1
value: 57.07035922704846
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.404169468728995
- type: f1
value: 53.94259011839138
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.16610625420309
- type: f1
value: 61.337103431499365
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.262945527908535
- type: f1
value: 49.7610691598921
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.54472091459314
- type: f1
value: 63.469099018440154
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.22797579018157
- type: f1
value: 64.89098471083001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.847343644922674
- type: f1
value: 47.8536963168393
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.45326160053799
- type: f1
value: 46.370078045805556
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 42.83120376597175
- type: f1
value: 39.68948521599982
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.5084061869536
- type: f1
value: 53.961876160401545
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.7895090786819
- type: f1
value: 61.134223684676
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98991257565569
- type: f1
value: 52.579862862826296
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.90316072629456
- type: f1
value: 58.203024538290336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.09818426361802
- type: f1
value: 54.22718458445455
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.991257565568255
- type: f1
value: 55.84892781767421
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.901143241425686
- type: f1
value: 52.25264332199797
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.96368527236047
- type: f1
value: 58.927243876153454
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.64223268325489
- type: f1
value: 62.340453718379706
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.52589105581708
- type: f1
value: 61.661113187022174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.84599865501009
- type: f1
value: 64.59342572873005
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.81035642232684
- type: f1
value: 57.5169089806797
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.75991930060525
- type: f1
value: 62.89531115787938
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.51647612642906
- type: f1
value: 54.33154780100043
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.985877605917956
- type: f1
value: 54.46187524463802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.03026227303296
- type: f1
value: 62.34377392877748
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.567585743106925
- type: f1
value: 50.73770655983206
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.2595830531271
- type: f1
value: 53.657327291708626
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.82784129119032
- type: f1
value: 54.82518072665301
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.06859448554137
- type: f1
value: 63.00185280500495
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.91055817081371
- type: f1
value: 55.54116301224262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.54404841963686
- type: f1
value: 59.57650946030184
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.27706792199059
- type: f1
value: 56.50010066083435
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.0719569603228
- type: f1
value: 61.817075925647956
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.23806321452591
- type: f1
value: 65.24917026029749
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.53530598520511
- type: f1
value: 61.71131132295768
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.04303967720243
- type: f1
value: 60.3950085685985
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.83591123066578
- type: f1
value: 54.95059828830849
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.62340282447881
- type: f1
value: 59.525159996498225
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.85406859448555
- type: f1
value: 59.129299095681276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.76731674512441
- type: f1
value: 61.159560612627715
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.181573638197705
- type: f1
value: 46.98422176289957
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.92737054472092
- type: f1
value: 67.69135611952979
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18964357767318
- type: f1
value: 68.46106138186214
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.0712844653665
- type: f1
value: 66.75545422473901
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4754539340955
- type: f1
value: 74.38427146553252
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.82515131136518
- type: f1
value: 69.63516462173847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.70880968392737
- type: f1
value: 67.45420662567926
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.95494283792871
- type: f1
value: 65.06191009049222
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.75924680564896
- type: f1
value: 68.30833379585945
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.806321452589096
- type: f1
value: 63.273048243765054
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.68997982515133
- type: f1
value: 66.54703855381324
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.46940147948891
- type: f1
value: 65.91017343463396
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.49899125756556
- type: f1
value: 57.90333469917769
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.9219905850706
- type: f1
value: 67.23169403762938
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.486213853396094
- type: f1
value: 54.85282355583758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.04169468728985
- type: f1
value: 68.83833333320462
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.88702084734365
- type: f1
value: 74.04474735232299
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.63416274377943
- type: f1
value: 55.11332211687954
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.23604572965702
- type: f1
value: 50.86529813991055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.62407531943511
- type: f1
value: 43.63485467164535
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.15601882985878
- type: f1
value: 57.522837510959924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.84532616005382
- type: f1
value: 69.60021127179697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.65770006724949
- type: f1
value: 55.84219135523227
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.53665097511768
- type: f1
value: 65.09087787792639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.31405514458642
- type: f1
value: 58.06135303831491
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.88231338264964
- type: f1
value: 62.751099407787926
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.86012104909213
- type: f1
value: 56.29118323058282
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.37390719569602
- type: f1
value: 66.27922244885102
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.8675184936113
- type: f1
value: 70.22146529932019
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.2212508406187
- type: f1
value: 67.77454802056282
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.18090114324143
- type: f1
value: 68.03737625431621
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 63.792945486912856
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.48217888365838
- type: f1
value: 69.96028997292197
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.17821116341627
- type: f1
value: 59.3935969827171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.86146603900471
- type: f1
value: 60.133692735032376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.89441829186282
- type: f1
value: 70.03064076194089
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.15063887020847
- type: f1
value: 56.23326278499678
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.43846671149966
- type: f1
value: 57.70440450281974
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.8507061197041
- type: f1
value: 59.22916396061171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.65568258238063
- type: f1
value: 69.90736239440633
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.8843308675185
- type: f1
value: 59.30332663713599
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.05312710154674
- type: f1
value: 67.44024062594775
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.111634162743776
- type: f1
value: 60.89083013084519
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44115669132482
- type: f1
value: 67.92227541674552
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4687289845326
- type: f1
value: 74.16376793486025
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.31876260928043
- type: f1
value: 68.5246745215607
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.90431696479766
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.259158476693774
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.28445330838555
- type: mrr
value: 31.15758529581164
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.353
- type: map_at_10
value: 11.565
- type: map_at_100
value: 14.097000000000001
- type: map_at_1000
value: 15.354999999999999
- type: map_at_3
value: 8.749
- type: map_at_5
value: 9.974
- type: mrr_at_1
value: 42.105
- type: mrr_at_10
value: 50.589
- type: mrr_at_100
value: 51.187000000000005
- type: mrr_at_1000
value: 51.233
- type: mrr_at_3
value: 48.246
- type: mrr_at_5
value: 49.546
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 31.009999999999998
- type: ndcg_at_100
value: 28.026
- type: ndcg_at_1000
value: 36.905
- type: ndcg_at_3
value: 35.983
- type: ndcg_at_5
value: 33.764
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 22.786
- type: precision_at_100
value: 6.916
- type: precision_at_1000
value: 1.981
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 28.731
- type: recall_at_1
value: 5.353
- type: recall_at_10
value: 15.039
- type: recall_at_100
value: 27.348
- type: recall_at_1000
value: 59.453
- type: recall_at_3
value: 9.792
- type: recall_at_5
value: 11.882
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.852
- type: map_at_10
value: 48.924
- type: map_at_100
value: 49.854
- type: map_at_1000
value: 49.886
- type: map_at_3
value: 44.9
- type: map_at_5
value: 47.387
- type: mrr_at_1
value: 38.035999999999994
- type: mrr_at_10
value: 51.644
- type: mrr_at_100
value: 52.339
- type: mrr_at_1000
value: 52.35999999999999
- type: mrr_at_3
value: 48.421
- type: mrr_at_5
value: 50.468999999999994
- type: ndcg_at_1
value: 38.007000000000005
- type: ndcg_at_10
value: 56.293000000000006
- type: ndcg_at_100
value: 60.167
- type: ndcg_at_1000
value: 60.916000000000004
- type: ndcg_at_3
value: 48.903999999999996
- type: ndcg_at_5
value: 52.978
- type: precision_at_1
value: 38.007000000000005
- type: precision_at_10
value: 9.041
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 22.084
- type: precision_at_5
value: 15.608
- type: recall_at_1
value: 33.852
- type: recall_at_10
value: 75.893
- type: recall_at_100
value: 92.589
- type: recall_at_1000
value: 98.153
- type: recall_at_3
value: 56.969
- type: recall_at_5
value: 66.283
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.174
- type: map_at_10
value: 82.891
- type: map_at_100
value: 83.545
- type: map_at_1000
value: 83.56700000000001
- type: map_at_3
value: 79.944
- type: map_at_5
value: 81.812
- type: mrr_at_1
value: 79.67999999999999
- type: mrr_at_10
value: 86.279
- type: mrr_at_100
value: 86.39
- type: mrr_at_1000
value: 86.392
- type: mrr_at_3
value: 85.21
- type: mrr_at_5
value: 85.92999999999999
- type: ndcg_at_1
value: 79.69000000000001
- type: ndcg_at_10
value: 86.929
- type: ndcg_at_100
value: 88.266
- type: ndcg_at_1000
value: 88.428
- type: ndcg_at_3
value: 83.899
- type: ndcg_at_5
value: 85.56700000000001
- type: precision_at_1
value: 79.69000000000001
- type: precision_at_10
value: 13.161000000000001
- type: precision_at_100
value: 1.513
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.603
- type: precision_at_5
value: 24.138
- type: recall_at_1
value: 69.174
- type: recall_at_10
value: 94.529
- type: recall_at_100
value: 99.15
- type: recall_at_1000
value: 99.925
- type: recall_at_3
value: 85.86200000000001
- type: recall_at_5
value: 90.501
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 39.13064340585255
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 58.97884249325877
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.4680000000000004
- type: map_at_10
value: 7.865
- type: map_at_100
value: 9.332
- type: map_at_1000
value: 9.587
- type: map_at_3
value: 5.800000000000001
- type: map_at_5
value: 6.8790000000000004
- type: mrr_at_1
value: 17.0
- type: mrr_at_10
value: 25.629
- type: mrr_at_100
value: 26.806
- type: mrr_at_1000
value: 26.889000000000003
- type: mrr_at_3
value: 22.8
- type: mrr_at_5
value: 24.26
- type: ndcg_at_1
value: 17.0
- type: ndcg_at_10
value: 13.895
- type: ndcg_at_100
value: 20.491999999999997
- type: ndcg_at_1000
value: 25.759999999999998
- type: ndcg_at_3
value: 13.347999999999999
- type: ndcg_at_5
value: 11.61
- type: precision_at_1
value: 17.0
- type: precision_at_10
value: 7.090000000000001
- type: precision_at_100
value: 1.669
- type: precision_at_1000
value: 0.294
- type: precision_at_3
value: 12.3
- type: precision_at_5
value: 10.02
- type: recall_at_1
value: 3.4680000000000004
- type: recall_at_10
value: 14.363000000000001
- type: recall_at_100
value: 33.875
- type: recall_at_1000
value: 59.711999999999996
- type: recall_at_3
value: 7.483
- type: recall_at_5
value: 10.173
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04084311714061
- type: cos_sim_spearman
value: 77.51342467443078
- type: euclidean_pearson
value: 80.0321166028479
- type: euclidean_spearman
value: 77.29249114733226
- type: manhattan_pearson
value: 80.03105964262431
- type: manhattan_spearman
value: 77.22373689514794
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.1680158034387
- type: cos_sim_spearman
value: 76.55983344071117
- type: euclidean_pearson
value: 79.75266678300143
- type: euclidean_spearman
value: 75.34516823467025
- type: manhattan_pearson
value: 79.75959151517357
- type: manhattan_spearman
value: 75.42330344141912
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 76.48898993209346
- type: cos_sim_spearman
value: 76.96954120323366
- type: euclidean_pearson
value: 76.94139109279668
- type: euclidean_spearman
value: 76.85860283201711
- type: manhattan_pearson
value: 76.6944095091912
- type: manhattan_spearman
value: 76.61096912972553
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 77.85082366246944
- type: cos_sim_spearman
value: 75.52053350101731
- type: euclidean_pearson
value: 77.1165845070926
- type: euclidean_spearman
value: 75.31216065884388
- type: manhattan_pearson
value: 77.06193941833494
- type: manhattan_spearman
value: 75.31003701700112
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.36305246526497
- type: cos_sim_spearman
value: 87.11704613927415
- type: euclidean_pearson
value: 86.04199125810939
- type: euclidean_spearman
value: 86.51117572414263
- type: manhattan_pearson
value: 86.0805106816633
- type: manhattan_spearman
value: 86.52798366512229
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.18536255599724
- type: cos_sim_spearman
value: 83.63377151025418
- type: euclidean_pearson
value: 83.24657467993141
- type: euclidean_spearman
value: 84.02751481993825
- type: manhattan_pearson
value: 83.11941806582371
- type: manhattan_spearman
value: 83.84251281019304
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.95816528475514
- type: cos_sim_spearman
value: 78.86607380120462
- type: euclidean_pearson
value: 78.51268699230545
- type: euclidean_spearman
value: 79.11649316502229
- type: manhattan_pearson
value: 78.32367302808157
- type: manhattan_spearman
value: 78.90277699624637
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.89126914997624
- type: cos_sim_spearman
value: 73.0296921832678
- type: euclidean_pearson
value: 71.50385903677738
- type: euclidean_spearman
value: 73.13368899716289
- type: manhattan_pearson
value: 71.47421463379519
- type: manhattan_spearman
value: 73.03383242946575
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 59.22923684492637
- type: cos_sim_spearman
value: 57.41013211368396
- type: euclidean_pearson
value: 61.21107388080905
- type: euclidean_spearman
value: 60.07620768697254
- type: manhattan_pearson
value: 59.60157142786555
- type: manhattan_spearman
value: 59.14069604103739
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.24345978774299
- type: cos_sim_spearman
value: 77.24225743830719
- type: euclidean_pearson
value: 76.66226095469165
- type: euclidean_spearman
value: 77.60708820493146
- type: manhattan_pearson
value: 76.05303324760429
- type: manhattan_spearman
value: 76.96353149912348
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.50879160160852
- type: cos_sim_spearman
value: 86.43594662965224
- type: euclidean_pearson
value: 86.06846012826577
- type: euclidean_spearman
value: 86.02041395794136
- type: manhattan_pearson
value: 86.10916255616904
- type: manhattan_spearman
value: 86.07346068198953
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 58.39803698977196
- type: cos_sim_spearman
value: 55.96910950423142
- type: euclidean_pearson
value: 58.17941175613059
- type: euclidean_spearman
value: 55.03019330522745
- type: manhattan_pearson
value: 57.333358138183286
- type: manhattan_spearman
value: 54.04614023149965
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 70.98304089637197
- type: cos_sim_spearman
value: 72.44071656215888
- type: euclidean_pearson
value: 72.19224359033983
- type: euclidean_spearman
value: 73.89871188913025
- type: manhattan_pearson
value: 71.21098311547406
- type: manhattan_spearman
value: 72.93405764824821
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.99792397466308
- type: cos_sim_spearman
value: 84.83824377879495
- type: euclidean_pearson
value: 85.70043288694438
- type: euclidean_spearman
value: 84.70627558703686
- type: manhattan_pearson
value: 85.89570850150801
- type: manhattan_spearman
value: 84.95806105313007
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.21850322994712
- type: cos_sim_spearman
value: 72.28669398117248
- type: euclidean_pearson
value: 73.40082510412948
- type: euclidean_spearman
value: 73.0326539281865
- type: manhattan_pearson
value: 71.8659633964841
- type: manhattan_spearman
value: 71.57817425823303
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.80921368595645
- type: cos_sim_spearman
value: 77.33209091229315
- type: euclidean_pearson
value: 76.53159540154829
- type: euclidean_spearman
value: 78.17960842810093
- type: manhattan_pearson
value: 76.13530186637601
- type: manhattan_spearman
value: 78.00701437666875
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.74980608267349
- type: cos_sim_spearman
value: 75.37597374318821
- type: euclidean_pearson
value: 74.90506081911661
- type: euclidean_spearman
value: 75.30151613124521
- type: manhattan_pearson
value: 74.62642745918002
- type: manhattan_spearman
value: 75.18619716592303
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.632662289205584
- type: cos_sim_spearman
value: 60.938543391610914
- type: euclidean_pearson
value: 62.113200529767056
- type: euclidean_spearman
value: 61.410312633261164
- type: manhattan_pearson
value: 61.75494698945686
- type: manhattan_spearman
value: 60.92726195322362
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.283470551557244
- type: cos_sim_spearman
value: 53.44833015864201
- type: euclidean_pearson
value: 41.17892011120893
- type: euclidean_spearman
value: 53.81441383126767
- type: manhattan_pearson
value: 41.17482200420659
- type: manhattan_spearman
value: 53.82180269276363
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.5069165306236
- type: cos_sim_spearman
value: 66.87803259033826
- type: euclidean_pearson
value: 63.5428979418236
- type: euclidean_spearman
value: 66.9293576586897
- type: manhattan_pearson
value: 63.59789526178922
- type: manhattan_spearman
value: 66.86555009875066
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.23026196280264
- type: cos_sim_spearman
value: 35.79397812652861
- type: euclidean_pearson
value: 17.828102102767353
- type: euclidean_spearman
value: 35.721501145568894
- type: manhattan_pearson
value: 17.77134274219677
- type: manhattan_spearman
value: 35.98107902846267
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 56.51946541393812
- type: cos_sim_spearman
value: 63.714686006214485
- type: euclidean_pearson
value: 58.32104651305898
- type: euclidean_spearman
value: 62.237110895702216
- type: manhattan_pearson
value: 58.579416468759185
- type: manhattan_spearman
value: 62.459738981727
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.76009839569795
- type: cos_sim_spearman
value: 56.65188431953149
- type: euclidean_pearson
value: 50.997682160915595
- type: euclidean_spearman
value: 55.99910008818135
- type: manhattan_pearson
value: 50.76220659606342
- type: manhattan_spearman
value: 55.517347595391456
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.232731157702425
- type: cos_sim_spearman
value: 59.89531877658345
- type: euclidean_pearson
value: 49.937914570348376
- type: euclidean_spearman
value: 60.220905659334036
- type: manhattan_pearson
value: 50.00987996844193
- type: manhattan_spearman
value: 60.081341480977926
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.717524559088005
- type: cos_sim_spearman
value: 66.83570886252286
- type: euclidean_pearson
value: 58.41338625505467
- type: euclidean_spearman
value: 66.68991427704938
- type: manhattan_pearson
value: 58.78638572916807
- type: manhattan_spearman
value: 66.58684161046335
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.2962042954962
- type: cos_sim_spearman
value: 76.58255504852025
- type: euclidean_pearson
value: 75.70983192778257
- type: euclidean_spearman
value: 77.4547684870542
- type: manhattan_pearson
value: 75.75565853870485
- type: manhattan_spearman
value: 76.90208974949428
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.47396266924846
- type: cos_sim_spearman
value: 56.492267162048606
- type: euclidean_pearson
value: 55.998505203070195
- type: euclidean_spearman
value: 56.46447012960222
- type: manhattan_pearson
value: 54.873172394430995
- type: manhattan_spearman
value: 56.58111534551218
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.87177267688686
- type: cos_sim_spearman
value: 74.57160943395763
- type: euclidean_pearson
value: 70.88330406826788
- type: euclidean_spearman
value: 74.29767636038422
- type: manhattan_pearson
value: 71.38245248369536
- type: manhattan_spearman
value: 74.53102232732175
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.80225656959544
- type: cos_sim_spearman
value: 76.52646173725735
- type: euclidean_pearson
value: 73.95710720200799
- type: euclidean_spearman
value: 76.54040031984111
- type: manhattan_pearson
value: 73.89679971946774
- type: manhattan_spearman
value: 76.60886958161574
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.70844249898789
- type: cos_sim_spearman
value: 72.68571783670241
- type: euclidean_pearson
value: 72.38800772441031
- type: euclidean_spearman
value: 72.86804422703312
- type: manhattan_pearson
value: 71.29840508203515
- type: manhattan_spearman
value: 71.86264441749513
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.647478923935694
- type: cos_sim_spearman
value: 63.74453623540931
- type: euclidean_pearson
value: 59.60138032437505
- type: euclidean_spearman
value: 63.947930832166065
- type: manhattan_pearson
value: 58.59735509491861
- type: manhattan_spearman
value: 62.082503844627404
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.8722516867162
- type: cos_sim_spearman
value: 71.81208592523012
- type: euclidean_pearson
value: 67.95315252165956
- type: euclidean_spearman
value: 73.00749822046009
- type: manhattan_pearson
value: 68.07884688638924
- type: manhattan_spearman
value: 72.34210325803069
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.5405814240949
- type: cos_sim_spearman
value: 60.56838649023775
- type: euclidean_pearson
value: 53.011731611314104
- type: euclidean_spearman
value: 58.533194841668426
- type: manhattan_pearson
value: 53.623067729338494
- type: manhattan_spearman
value: 58.018756154446926
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 13.611046866216112
- type: cos_sim_spearman
value: 28.238192909158492
- type: euclidean_pearson
value: 22.16189199885129
- type: euclidean_spearman
value: 35.012895679076564
- type: manhattan_pearson
value: 21.969771178698387
- type: manhattan_spearman
value: 32.456985088607475
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 74.58077407011655
- type: cos_sim_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 74.64613843596234
- type: euclidean_spearman
value: 84.51542547285167
- type: manhattan_pearson
value: 75.15335973101396
- type: manhattan_spearman
value: 84.51542547285167
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.0739825531578
- type: cos_sim_spearman
value: 84.01057479311115
- type: euclidean_pearson
value: 83.85453227433344
- type: euclidean_spearman
value: 84.01630226898655
- type: manhattan_pearson
value: 83.75323603028978
- type: manhattan_spearman
value: 83.89677983727685
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.12945623123957
- type: mrr
value: 93.87738713719106
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.983000000000004
- type: map_at_10
value: 62.946000000000005
- type: map_at_100
value: 63.514
- type: map_at_1000
value: 63.554
- type: map_at_3
value: 60.183
- type: map_at_5
value: 61.672000000000004
- type: mrr_at_1
value: 55.667
- type: mrr_at_10
value: 64.522
- type: mrr_at_100
value: 64.957
- type: mrr_at_1000
value: 64.995
- type: mrr_at_3
value: 62.388999999999996
- type: mrr_at_5
value: 63.639
- type: ndcg_at_1
value: 55.667
- type: ndcg_at_10
value: 67.704
- type: ndcg_at_100
value: 70.299
- type: ndcg_at_1000
value: 71.241
- type: ndcg_at_3
value: 62.866
- type: ndcg_at_5
value: 65.16999999999999
- type: precision_at_1
value: 55.667
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.444
- type: precision_at_5
value: 16.133
- type: recall_at_1
value: 52.983000000000004
- type: recall_at_10
value: 80.656
- type: recall_at_100
value: 92.5
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 67.744
- type: recall_at_5
value: 73.433
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72772277227723
- type: cos_sim_ap
value: 92.17845897992215
- type: cos_sim_f1
value: 85.9746835443038
- type: cos_sim_precision
value: 87.07692307692308
- type: cos_sim_recall
value: 84.89999999999999
- type: dot_accuracy
value: 99.3039603960396
- type: dot_ap
value: 60.70244020124878
- type: dot_f1
value: 59.92742353551063
- type: dot_precision
value: 62.21743810548978
- type: dot_recall
value: 57.8
- type: euclidean_accuracy
value: 99.71683168316832
- type: euclidean_ap
value: 91.53997039964659
- type: euclidean_f1
value: 84.88372093023257
- type: euclidean_precision
value: 90.02242152466367
- type: euclidean_recall
value: 80.30000000000001
- type: manhattan_accuracy
value: 99.72376237623763
- type: manhattan_ap
value: 91.80756777790289
- type: manhattan_f1
value: 85.48468106479157
- type: manhattan_precision
value: 85.8728557013118
- type: manhattan_recall
value: 85.1
- type: max_accuracy
value: 99.72772277227723
- type: max_ap
value: 92.17845897992215
- type: max_f1
value: 85.9746835443038
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.52464042600003
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.071631948736
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.19552407604654
- type: mrr
value: 49.95269130379425
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.345293033095427
- type: cos_sim_spearman
value: 29.976931423258403
- type: dot_pearson
value: 27.047078008958408
- type: dot_spearman
value: 27.75894368380218
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.706
- type: map_at_100
value: 9.634
- type: map_at_1000
value: 23.665
- type: map_at_3
value: 0.5950000000000001
- type: map_at_5
value: 0.95
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 80.0
- type: ndcg_at_10
value: 72.573
- type: ndcg_at_100
value: 53.954
- type: ndcg_at_1000
value: 47.760999999999996
- type: ndcg_at_3
value: 76.173
- type: ndcg_at_5
value: 75.264
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 76.4
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.802
- type: precision_at_3
value: 81.333
- type: precision_at_5
value: 80.4
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 1.925
- type: recall_at_100
value: 12.762
- type: recall_at_1000
value: 44.946000000000005
- type: recall_at_3
value: 0.634
- type: recall_at_5
value: 1.051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.0
- type: f1
value: 88.55666666666666
- type: precision
value: 87.46166666666667
- type: recall
value: 91.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.22543352601156
- type: f1
value: 51.03220478943021
- type: precision
value: 48.8150289017341
- type: recall
value: 57.22543352601156
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.58536585365854
- type: f1
value: 39.66870798578116
- type: precision
value: 37.416085946573745
- type: recall
value: 46.58536585365854
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 86.77999999999999
- type: precision
value: 85.45333333333332
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.58333333333331
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.3
- type: precision
value: 89.31666666666668
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.9
- type: f1
value: 83.67190476190476
- type: precision
value: 82.23333333333332
- type: recall
value: 86.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.0
- type: f1
value: 42.23229092632078
- type: precision
value: 39.851634683724235
- type: recall
value: 50.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.3
- type: f1
value: 70.86190476190477
- type: precision
value: 68.68777777777777
- type: recall
value: 76.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.073170731707314
- type: f1
value: 50.658958927251604
- type: precision
value: 48.26480836236933
- type: recall
value: 57.073170731707314
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.2
- type: f1
value: 62.156507936507936
- type: precision
value: 59.84964285714286
- type: recall
value: 68.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.52126366950182
- type: f1
value: 72.8496210148701
- type: precision
value: 70.92171498003819
- type: recall
value: 77.52126366950182
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.78260869565217
- type: f1
value: 65.32422360248447
- type: precision
value: 63.063067367415194
- type: recall
value: 70.78260869565217
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.43478260869566
- type: f1
value: 73.02608695652172
- type: precision
value: 70.63768115942028
- type: recall
value: 78.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.9
- type: f1
value: 55.309753694581275
- type: precision
value: 53.130476190476195
- type: recall
value: 60.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.89999999999999
- type: f1
value: 67.92023809523809
- type: precision
value: 65.82595238095237
- type: recall
value: 72.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.80337756332931
- type: f1
value: 39.42174900558496
- type: precision
value: 36.97101116280851
- type: recall
value: 46.80337756332931
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.8
- type: f1
value: 86.79
- type: precision
value: 85.375
- type: recall
value: 89.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.199999999999996
- type: f1
value: 39.95484348984349
- type: precision
value: 37.561071428571424
- type: recall
value: 47.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.8
- type: f1
value: 84.68190476190475
- type: precision
value: 83.275
- type: recall
value: 87.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.76190476190476
- type: f1
value: 42.14965986394558
- type: precision
value: 39.96743626743626
- type: recall
value: 48.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.10000000000001
- type: f1
value: 59.58580086580086
- type: precision
value: 57.150238095238095
- type: recall
value: 66.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.3
- type: f1
value: 84.0
- type: precision
value: 82.48666666666666
- type: recall
value: 87.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 87.79523809523809
- type: precision
value: 86.6
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.0
- type: f1
value: 83.81
- type: precision
value: 82.36666666666666
- type: recall
value: 87.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.9
- type: f1
value: 57.76533189033189
- type: precision
value: 55.50595238095239
- type: recall
value: 63.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.1
- type: f1
value: 71.83690476190478
- type: precision
value: 70.04928571428573
- type: recall
value: 76.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.3
- type: f1
value: 59.32626984126984
- type: precision
value: 56.62535714285713
- type: recall
value: 66.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.60000000000001
- type: f1
value: 87.96333333333334
- type: precision
value: 86.73333333333333
- type: recall
value: 90.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.16666666666666
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.71428571428571
- type: f1
value: 82.29142600436403
- type: precision
value: 80.8076626877166
- type: recall
value: 85.71428571428571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.88888888888889
- type: f1
value: 85.7834757834758
- type: precision
value: 84.43732193732193
- type: recall
value: 88.88888888888889
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 85.67190476190476
- type: precision
value: 84.43333333333332
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.72727272727273
- type: f1
value: 78.21969696969695
- type: precision
value: 76.18181818181819
- type: recall
value: 82.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 61.0062893081761
- type: f1
value: 55.13976240391334
- type: precision
value: 52.92112499659669
- type: recall
value: 61.0062893081761
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.86666666666666
- type: precision
value: 85.69166666666668
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.54085603112841
- type: f1
value: 68.56031128404669
- type: precision
value: 66.53047989623866
- type: recall
value: 73.54085603112841
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.58974358974359
- type: f1
value: 36.45299145299145
- type: precision
value: 33.81155881155882
- type: recall
value: 43.58974358974359
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.599999999999994
- type: f1
value: 53.264689754689755
- type: precision
value: 50.869166666666665
- type: recall
value: 59.599999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.2
- type: f1
value: 81.61666666666665
- type: precision
value: 80.02833333333335
- type: recall
value: 85.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.78504672897196
- type: f1
value: 58.00029669188548
- type: precision
value: 55.815809968847354
- type: recall
value: 63.78504672897196
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.5
- type: f1
value: 61.518333333333345
- type: precision
value: 59.622363699102834
- type: recall
value: 66.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 85.60222222222221
- type: precision
value: 84.27916666666665
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.699999999999996
- type: f1
value: 52.732375957375965
- type: precision
value: 50.63214035964035
- type: recall
value: 58.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 89.99666666666667
- type: precision
value: 89.03333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.10000000000001
- type: f1
value: 87.55666666666667
- type: precision
value: 86.36166666666668
- type: recall
value: 90.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 88.89000000000001
- type: precision
value: 87.71166666666666
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.7
- type: f1
value: 60.67427750410509
- type: precision
value: 58.71785714285714
- type: recall
value: 65.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 81.93190476190475
- type: precision
value: 80.37833333333333
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.833333333333336
- type: f1
value: 42.006625781625786
- type: precision
value: 40.077380952380956
- type: recall
value: 47.833333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.4
- type: f1
value: 8.24465007215007
- type: precision
value: 7.664597069597071
- type: recall
value: 10.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.6
- type: f1
value: 77.76333333333334
- type: precision
value: 75.57833333333332
- type: recall
value: 82.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.67857142857143
- type: f1
value: 44.302721088435376
- type: precision
value: 41.49801587301587
- type: recall
value: 52.67857142857143
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.3205268935236
- type: f1
value: 22.426666605171157
- type: precision
value: 20.685900116470915
- type: recall
value: 28.3205268935236
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 22.7
- type: f1
value: 17.833970473970474
- type: precision
value: 16.407335164835164
- type: recall
value: 22.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.92999999999999
- type: precision
value: 88.87
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.25
- type: precision
value: 88.21666666666667
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.19999999999999
- type: f1
value: 63.38269841269841
- type: precision
value: 61.14773809523809
- type: recall
value: 69.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.8
- type: f1
value: 42.839915639915645
- type: precision
value: 40.770287114845935
- type: recall
value: 48.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.8
- type: f1
value: 85.90666666666668
- type: precision
value: 84.54166666666666
- type: recall
value: 88.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.6
- type: f1
value: 40.85892920804686
- type: precision
value: 38.838223114604695
- type: recall
value: 46.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.0
- type: f1
value: 80.14190476190475
- type: precision
value: 78.45333333333333
- type: recall
value: 84.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5
- type: f1
value: 87.78333333333333
- type: precision
value: 86.5
- type: recall
value: 90.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.5
- type: f1
value: 69.48397546897547
- type: precision
value: 67.51869047619049
- type: recall
value: 74.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.846715328467155
- type: f1
value: 27.828177499710343
- type: precision
value: 26.63451511991658
- type: recall
value: 32.846715328467155
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.0
- type: f1
value: 6.07664116764988
- type: precision
value: 5.544177607179943
- type: recall
value: 8.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.38555555555554
- type: precision
value: 82.91583333333334
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 84.08333333333331
- type: precision
value: 82.47333333333333
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.95238095238095
- type: f1
value: 76.13095238095238
- type: precision
value: 74.05753968253967
- type: recall
value: 80.95238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.971422975172975
- type: precision
value: 6.557814916172301
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.099378881987576
- type: f1
value: 37.01649742022413
- type: precision
value: 34.69420618488942
- type: recall
value: 44.099378881987576
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.32666666666667
- type: precision
value: 78.60666666666665
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.5
- type: f1
value: 90.49666666666666
- type: precision
value: 89.56666666666668
- type: recall
value: 92.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.0
- type: f1
value: 8.268423529875141
- type: precision
value: 7.878118605532398
- type: recall
value: 10.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.22077922077922
- type: f1
value: 74.27128427128426
- type: precision
value: 72.28715728715729
- type: recall
value: 79.22077922077922
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.64885496183206
- type: f1
value: 58.87495456197747
- type: precision
value: 55.992366412213734
- type: recall
value: 65.64885496183206
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.06986899563319
- type: f1
value: 94.78408539543909
- type: precision
value: 94.15332362930616
- type: recall
value: 96.06986899563319
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.2
- type: f1
value: 71.72571428571428
- type: precision
value: 69.41000000000001
- type: recall
value: 77.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.4406779661017
- type: f1
value: 83.2391713747646
- type: precision
value: 81.74199623352166
- type: recall
value: 86.4406779661017
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.4
- type: f1
value: 6.017828743398003
- type: precision
value: 5.4829865484756795
- type: recall
value: 8.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.5
- type: f1
value: 79.74833333333333
- type: precision
value: 78.04837662337664
- type: recall
value: 83.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.4
- type: f1
value: 54.467301587301584
- type: precision
value: 52.23242424242424
- type: recall
value: 60.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.9
- type: f1
value: 69.68699134199134
- type: precision
value: 67.59873015873016
- type: recall
value: 74.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.0
- type: f1
value: 84.9652380952381
- type: precision
value: 83.66166666666666
- type: recall
value: 88.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.1
- type: f1
value: 7.681244588744588
- type: precision
value: 7.370043290043291
- type: recall
value: 9.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9651474530831
- type: f1
value: 76.84220605132133
- type: precision
value: 75.19606398962966
- type: recall
value: 80.9651474530831
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.9
- type: f1
value: 83.705
- type: precision
value: 82.3120634920635
- type: recall
value: 86.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.64426877470356
- type: f1
value: 23.98763072676116
- type: precision
value: 22.506399397703746
- type: recall
value: 29.64426877470356
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.4225352112676
- type: f1
value: 62.84037558685445
- type: precision
value: 59.56572769953053
- type: recall
value: 70.4225352112676
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 19.64071856287425
- type: f1
value: 15.125271011207756
- type: precision
value: 13.865019261197494
- type: recall
value: 19.64071856287425
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.80666666666666
- type: precision
value: 86.70833333333331
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 18.407224958949097
- type: precision
value: 16.982385430661292
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.98591549295775
- type: f1
value: 49.94718309859154
- type: precision
value: 47.77864154624717
- type: recall
value: 55.98591549295775
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.07692307692307
- type: f1
value: 66.74358974358974
- type: precision
value: 64.06837606837607
- type: recall
value: 73.07692307692307
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.25
- type: precision
value: 92.43333333333332
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.78705636743215
- type: f1
value: 31.63899658680452
- type: precision
value: 29.72264397629742
- type: recall
value: 37.78705636743215
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.6
- type: f1
value: 16.91697302697303
- type: precision
value: 15.71225147075147
- type: recall
value: 21.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.01628664495115
- type: f1
value: 81.38514037536838
- type: precision
value: 79.83170466883823
- type: recall
value: 85.01628664495115
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.39999999999999
- type: f1
value: 79.96380952380952
- type: precision
value: 78.48333333333333
- type: recall
value: 83.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.2
- type: f1
value: 79.26190476190476
- type: precision
value: 77.58833333333334
- type: recall
value: 83.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.59055118110236
- type: f1
value: 71.66854143232096
- type: precision
value: 70.30183727034121
- type: recall
value: 75.59055118110236
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.5
- type: f1
value: 59.26095238095238
- type: precision
value: 56.81909090909092
- type: recall
value: 65.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.26315789473685
- type: f1
value: 47.986523325858506
- type: precision
value: 45.33950006595436
- type: recall
value: 55.26315789473685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.89999999999999
- type: f1
value: 78.835
- type: precision
value: 77.04761904761905
- type: recall
value: 82.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.269230769230774
- type: f1
value: 36.20421245421245
- type: precision
value: 33.57371794871795
- type: recall
value: 43.269230769230774
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.0
- type: f1
value: 84.70666666666666
- type: precision
value: 83.23166666666665
- type: recall
value: 88.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.4
- type: f1
value: 72.54666666666667
- type: precision
value: 70.54318181818181
- type: recall
value: 77.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.60000000000001
- type: f1
value: 74.1588888888889
- type: precision
value: 72.30250000000001
- type: recall
value: 78.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.40566037735849
- type: f1
value: 66.82587328813744
- type: precision
value: 64.75039308176099
- type: recall
value: 72.40566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.8
- type: f1
value: 68.56357142857144
- type: precision
value: 66.3178822055138
- type: recall
value: 73.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.78832116788321
- type: f1
value: 89.3552311435523
- type: precision
value: 88.20559610705597
- type: recall
value: 91.78832116788321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.05085581085581
- type: precision
value: 66.955
- type: recall
value: 74.3
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.896
- type: map_at_10
value: 8.993
- type: map_at_100
value: 14.133999999999999
- type: map_at_1000
value: 15.668000000000001
- type: map_at_3
value: 5.862
- type: map_at_5
value: 7.17
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 42.931000000000004
- type: mrr_at_100
value: 44.81
- type: mrr_at_1000
value: 44.81
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.701
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 21.163
- type: ndcg_at_100
value: 33.306000000000004
- type: ndcg_at_1000
value: 45.275999999999996
- type: ndcg_at_3
value: 25.685999999999996
- type: ndcg_at_5
value: 23.732
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 17.755000000000003
- type: precision_at_100
value: 6.938999999999999
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 25.85
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 2.896
- type: recall_at_10
value: 13.333999999999998
- type: recall_at_100
value: 43.517
- type: recall_at_1000
value: 79.836
- type: recall_at_3
value: 6.306000000000001
- type: recall_at_5
value: 8.825
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.3874
- type: ap
value: 13.829909072469423
- type: f1
value: 53.54534203543492
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.62026032823995
- type: f1
value: 62.85251350485221
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 33.21527881409797
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.97943613280086
- type: cos_sim_ap
value: 70.75454316885921
- type: cos_sim_f1
value: 65.38274012676743
- type: cos_sim_precision
value: 60.761214318078835
- type: cos_sim_recall
value: 70.76517150395777
- type: dot_accuracy
value: 79.0546581629612
- type: dot_ap
value: 47.3197121792147
- type: dot_f1
value: 49.20106524633821
- type: dot_precision
value: 42.45499808502489
- type: dot_recall
value: 58.49604221635884
- type: euclidean_accuracy
value: 85.08076533349228
- type: euclidean_ap
value: 70.95016106374474
- type: euclidean_f1
value: 65.43987900176455
- type: euclidean_precision
value: 62.64478764478765
- type: euclidean_recall
value: 68.49604221635884
- type: manhattan_accuracy
value: 84.93771234428085
- type: manhattan_ap
value: 70.63668388755362
- type: manhattan_f1
value: 65.23895401262398
- type: manhattan_precision
value: 56.946084218811485
- type: manhattan_recall
value: 76.35883905013192
- type: max_accuracy
value: 85.08076533349228
- type: max_ap
value: 70.95016106374474
- type: max_f1
value: 65.43987900176455
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.69096130709822
- type: cos_sim_ap
value: 84.82526278228542
- type: cos_sim_f1
value: 77.65485060585536
- type: cos_sim_precision
value: 75.94582658619167
- type: cos_sim_recall
value: 79.44256236526024
- type: dot_accuracy
value: 80.97954748321496
- type: dot_ap
value: 64.81642914145866
- type: dot_f1
value: 60.631996987229975
- type: dot_precision
value: 54.5897293631712
- type: dot_recall
value: 68.17831844779796
- type: euclidean_accuracy
value: 88.6987231730508
- type: euclidean_ap
value: 84.80003825477253
- type: euclidean_f1
value: 77.67194179854496
- type: euclidean_precision
value: 75.7128235122094
- type: euclidean_recall
value: 79.73514012935017
- type: manhattan_accuracy
value: 88.62692591298949
- type: manhattan_ap
value: 84.80451408255276
- type: manhattan_f1
value: 77.69888949572183
- type: manhattan_precision
value: 73.70311528631622
- type: manhattan_recall
value: 82.15275639051433
- type: max_accuracy
value: 88.6987231730508
- type: max_ap
value: 84.82526278228542
- type: max_f1
value: 77.69888949572183
---
# multilingual-e5-small-mlx
This model was converted to MLX format from [`intfloat/multilingual-e5-small`](https://huggingface.co/intfloat/multilingual-e5-small).
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-small) for more details on the model.
## Use with mlx
```bash
# Install MLX and fetch the examples repository
pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
# Run the example script against the converted model
python generate.py --model mlx-community/multilingual-e5-small-mlx --prompt "My name is"
```
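The `generate.py` call above exercises the MLX text-generation example; since `multilingual-e5-small` is an embedding model, the more common workflow is computing sentence embeddings. The sketch below is a minimal example against the original PyTorch checkpoint via `sentence-transformers` (assumed to be installed separately); the E5 family expects `query: ` / `passage: ` prefixes on its inputs, and the model and prefix choices here are illustrative rather than part of the MLX conversion itself.

```python
# Minimal sketch, assuming the sentence-transformers package is available and that
# embeddings from the original PyTorch checkpoint (not the MLX port) are acceptable.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-small")

# E5 models are trained with "query: " / "passage: " prefixes on the input text.
sentences = [
    "query: how much protein should a female eat",
    "passage: The recommended daily protein intake for an adult woman is about 46 grams.",
]

# L2-normalized embeddings, suitable for cosine similarity via a plain dot product.
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)  # e.g. (2, 384) for the small E5 variant
```

Because the vectors are normalized, the query-passage similarity is simply `embeddings[0] @ embeddings[1]`.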
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
jinaai/jina-embeddings-v2-base-de | jinaai | feature-extraction | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"fill-mask",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"custom_code",
"de",
"en",
"arxiv:2108.12409",
"arxiv:2402.17016",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"region:eu"
] | 1,705 | 1,736 | 49,768 | 72 | ---
language:
- de
- en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
inference: false
model-index:
- name: jina-embeddings-v2-base-de
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.76119402985076
- type: ap
value: 35.99577188521176
- type: f1
value: 67.50397431543269
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.9186295503212
- type: ap
value: 79.73307115840507
- type: f1
value: 66.66245744831339
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 77.52215
- type: ap
value: 71.85051037177416
- type: f1
value: 77.4171096157774
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.498
- type: f1
value: 38.058193386555956
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.717999999999996
- type: f1
value: 37.22674371574757
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.319999999999997
- type: map_at_10
value: 40.351
- type: map_at_100
value: 41.435
- type: map_at_1000
value: 41.443000000000005
- type: map_at_3
value: 35.266
- type: map_at_5
value: 37.99
- type: mrr_at_1
value: 25.746999999999996
- type: mrr_at_10
value: 40.515
- type: mrr_at_100
value: 41.606
- type: mrr_at_1000
value: 41.614000000000004
- type: mrr_at_3
value: 35.42
- type: mrr_at_5
value: 38.112
- type: ndcg_at_1
value: 25.319999999999997
- type: ndcg_at_10
value: 49.332
- type: ndcg_at_100
value: 53.909
- type: ndcg_at_1000
value: 54.089
- type: ndcg_at_3
value: 38.705
- type: ndcg_at_5
value: 43.606
- type: precision_at_1
value: 25.319999999999997
- type: precision_at_10
value: 7.831
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 16.24
- type: precision_at_5
value: 12.119
- type: recall_at_1
value: 25.319999999999997
- type: recall_at_10
value: 78.307
- type: recall_at_100
value: 98.222
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 48.72
- type: recall_at_5
value: 60.597
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 41.43100588255654
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.08988904593667
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.55514765595906
- type: mrr
value: 73.51393835465858
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 79.6723823121172
- type: cos_sim_spearman
value: 76.90596922214986
- type: euclidean_pearson
value: 77.87910737957918
- type: euclidean_spearman
value: 76.66319260598262
- type: manhattan_pearson
value: 77.37039493457965
- type: manhattan_spearman
value: 76.09872191280964
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.97703549060543
- type: f1
value: 98.86569241475296
- type: precision
value: 98.81002087682673
- type: recall
value: 98.97703549060543
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 83.93506493506493
- type: f1
value: 83.91014949949302
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 34.970675877585144
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 28.779230269190954
- task:
type: Clustering
dataset:
name: MTEB BlurbsClusteringP2P
type: slvnwhrl/blurbs-clustering-p2p
config: default
split: test
revision: a2dd5b02a77de3466a3eaa98ae586b5610314496
metrics:
- type: v_measure
value: 35.490175601567216
- task:
type: Clustering
dataset:
name: MTEB BlurbsClusteringS2S
type: slvnwhrl/blurbs-clustering-s2s
config: default
split: test
revision: 9bfff9a7f8f6dc6ffc9da71c48dd48b68696471d
metrics:
- type: v_measure
value: 16.16638280560168
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.830999999999996
- type: map_at_10
value: 41.355
- type: map_at_100
value: 42.791000000000004
- type: map_at_1000
value: 42.918
- type: map_at_3
value: 38.237
- type: map_at_5
value: 40.066
- type: mrr_at_1
value: 38.484
- type: mrr_at_10
value: 47.593
- type: mrr_at_100
value: 48.388
- type: mrr_at_1000
value: 48.439
- type: mrr_at_3
value: 45.279
- type: mrr_at_5
value: 46.724
- type: ndcg_at_1
value: 38.484
- type: ndcg_at_10
value: 47.27
- type: ndcg_at_100
value: 52.568000000000005
- type: ndcg_at_1000
value: 54.729000000000006
- type: ndcg_at_3
value: 43.061
- type: ndcg_at_5
value: 45.083
- type: precision_at_1
value: 38.484
- type: precision_at_10
value: 8.927
- type: precision_at_100
value: 1.425
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.85
- type: recall_at_1
value: 30.830999999999996
- type: recall_at_10
value: 57.87799999999999
- type: recall_at_100
value: 80.124
- type: recall_at_1000
value: 94.208
- type: recall_at_3
value: 45.083
- type: recall_at_5
value: 51.154999999999994
- type: map_at_1
value: 25.782
- type: map_at_10
value: 34.492
- type: map_at_100
value: 35.521
- type: map_at_1000
value: 35.638
- type: map_at_3
value: 31.735999999999997
- type: map_at_5
value: 33.339
- type: mrr_at_1
value: 32.357
- type: mrr_at_10
value: 39.965
- type: mrr_at_100
value: 40.644000000000005
- type: mrr_at_1000
value: 40.695
- type: mrr_at_3
value: 37.739
- type: mrr_at_5
value: 39.061
- type: ndcg_at_1
value: 32.357
- type: ndcg_at_10
value: 39.644
- type: ndcg_at_100
value: 43.851
- type: ndcg_at_1000
value: 46.211999999999996
- type: ndcg_at_3
value: 35.675000000000004
- type: ndcg_at_5
value: 37.564
- type: precision_at_1
value: 32.357
- type: precision_at_10
value: 7.344
- type: precision_at_100
value: 1.201
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 17.155
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 25.782
- type: recall_at_10
value: 49.132999999999996
- type: recall_at_100
value: 67.24
- type: recall_at_1000
value: 83.045
- type: recall_at_3
value: 37.021
- type: recall_at_5
value: 42.548
- type: map_at_1
value: 35.778999999999996
- type: map_at_10
value: 47.038000000000004
- type: map_at_100
value: 48.064
- type: map_at_1000
value: 48.128
- type: map_at_3
value: 44.186
- type: map_at_5
value: 45.788000000000004
- type: mrr_at_1
value: 41.254000000000005
- type: mrr_at_10
value: 50.556999999999995
- type: mrr_at_100
value: 51.296
- type: mrr_at_1000
value: 51.331
- type: mrr_at_3
value: 48.318
- type: mrr_at_5
value: 49.619
- type: ndcg_at_1
value: 41.254000000000005
- type: ndcg_at_10
value: 52.454
- type: ndcg_at_100
value: 56.776
- type: ndcg_at_1000
value: 58.181000000000004
- type: ndcg_at_3
value: 47.713
- type: ndcg_at_5
value: 49.997
- type: precision_at_1
value: 41.254000000000005
- type: precision_at_10
value: 8.464
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 21.526
- type: precision_at_5
value: 14.696000000000002
- type: recall_at_1
value: 35.778999999999996
- type: recall_at_10
value: 64.85300000000001
- type: recall_at_100
value: 83.98400000000001
- type: recall_at_1000
value: 94.18299999999999
- type: recall_at_3
value: 51.929
- type: recall_at_5
value: 57.666
- type: map_at_1
value: 21.719
- type: map_at_10
value: 29.326999999999998
- type: map_at_100
value: 30.314000000000004
- type: map_at_1000
value: 30.397000000000002
- type: map_at_3
value: 27.101
- type: map_at_5
value: 28.141
- type: mrr_at_1
value: 23.503
- type: mrr_at_10
value: 31.225
- type: mrr_at_100
value: 32.096000000000004
- type: mrr_at_1000
value: 32.159
- type: mrr_at_3
value: 29.076999999999998
- type: mrr_at_5
value: 30.083
- type: ndcg_at_1
value: 23.503
- type: ndcg_at_10
value: 33.842
- type: ndcg_at_100
value: 39.038000000000004
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 29.347
- type: ndcg_at_5
value: 31.121
- type: precision_at_1
value: 23.503
- type: precision_at_10
value: 5.266
- type: precision_at_100
value: 0.831
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 12.504999999999999
- type: precision_at_5
value: 8.565000000000001
- type: recall_at_1
value: 21.719
- type: recall_at_10
value: 46.024
- type: recall_at_100
value: 70.78999999999999
- type: recall_at_1000
value: 87.022
- type: recall_at_3
value: 33.64
- type: recall_at_5
value: 37.992
- type: map_at_1
value: 15.601
- type: map_at_10
value: 22.054000000000002
- type: map_at_100
value: 23.177
- type: map_at_1000
value: 23.308
- type: map_at_3
value: 19.772000000000002
- type: map_at_5
value: 21.055
- type: mrr_at_1
value: 19.403000000000002
- type: mrr_at_10
value: 26.409
- type: mrr_at_100
value: 27.356
- type: mrr_at_1000
value: 27.441
- type: mrr_at_3
value: 24.108999999999998
- type: mrr_at_5
value: 25.427
- type: ndcg_at_1
value: 19.403000000000002
- type: ndcg_at_10
value: 26.474999999999998
- type: ndcg_at_100
value: 32.086
- type: ndcg_at_1000
value: 35.231
- type: ndcg_at_3
value: 22.289
- type: ndcg_at_5
value: 24.271
- type: precision_at_1
value: 19.403000000000002
- type: precision_at_10
value: 4.813
- type: precision_at_100
value: 0.8869999999999999
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.710999999999999
- type: recall_at_1
value: 15.601
- type: recall_at_10
value: 35.916
- type: recall_at_100
value: 60.8
- type: recall_at_1000
value: 83.245
- type: recall_at_3
value: 24.321
- type: recall_at_5
value: 29.372999999999998
- type: map_at_1
value: 25.522
- type: map_at_10
value: 34.854
- type: map_at_100
value: 36.269
- type: map_at_1000
value: 36.387
- type: map_at_3
value: 32.187
- type: map_at_5
value: 33.692
- type: mrr_at_1
value: 31.375999999999998
- type: mrr_at_10
value: 40.471000000000004
- type: mrr_at_100
value: 41.481
- type: mrr_at_1000
value: 41.533
- type: mrr_at_3
value: 38.274
- type: mrr_at_5
value: 39.612
- type: ndcg_at_1
value: 31.375999999999998
- type: ndcg_at_10
value: 40.298
- type: ndcg_at_100
value: 46.255
- type: ndcg_at_1000
value: 48.522
- type: ndcg_at_3
value: 36.049
- type: ndcg_at_5
value: 38.095
- type: precision_at_1
value: 31.375999999999998
- type: precision_at_10
value: 7.305000000000001
- type: precision_at_100
value: 1.201
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.107999999999999
- type: recall_at_1
value: 25.522
- type: recall_at_10
value: 50.988
- type: recall_at_100
value: 76.005
- type: recall_at_1000
value: 91.11200000000001
- type: recall_at_3
value: 38.808
- type: recall_at_5
value: 44.279
- type: map_at_1
value: 24.615000000000002
- type: map_at_10
value: 32.843
- type: map_at_100
value: 34.172999999999995
- type: map_at_1000
value: 34.286
- type: map_at_3
value: 30.125
- type: map_at_5
value: 31.495
- type: mrr_at_1
value: 30.023
- type: mrr_at_10
value: 38.106
- type: mrr_at_100
value: 39.01
- type: mrr_at_1000
value: 39.071
- type: mrr_at_3
value: 35.674
- type: mrr_at_5
value: 36.924
- type: ndcg_at_1
value: 30.023
- type: ndcg_at_10
value: 38.091
- type: ndcg_at_100
value: 43.771
- type: ndcg_at_1000
value: 46.315
- type: ndcg_at_3
value: 33.507
- type: ndcg_at_5
value: 35.304
- type: precision_at_1
value: 30.023
- type: precision_at_10
value: 6.837999999999999
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.152
- type: precision_at_3
value: 15.562999999999999
- type: precision_at_5
value: 10.936
- type: recall_at_1
value: 24.615000000000002
- type: recall_at_10
value: 48.691
- type: recall_at_100
value: 72.884
- type: recall_at_1000
value: 90.387
- type: recall_at_3
value: 35.659
- type: recall_at_5
value: 40.602
- type: map_at_1
value: 23.223666666666666
- type: map_at_10
value: 31.338166666666673
- type: map_at_100
value: 32.47358333333333
- type: map_at_1000
value: 32.5955
- type: map_at_3
value: 28.84133333333333
- type: map_at_5
value: 30.20808333333333
- type: mrr_at_1
value: 27.62483333333333
- type: mrr_at_10
value: 35.385916666666674
- type: mrr_at_100
value: 36.23325
- type: mrr_at_1000
value: 36.29966666666667
- type: mrr_at_3
value: 33.16583333333333
- type: mrr_at_5
value: 34.41983333333334
- type: ndcg_at_1
value: 27.62483333333333
- type: ndcg_at_10
value: 36.222
- type: ndcg_at_100
value: 41.29491666666666
- type: ndcg_at_1000
value: 43.85508333333333
- type: ndcg_at_3
value: 31.95116666666667
- type: ndcg_at_5
value: 33.88541666666667
- type: precision_at_1
value: 27.62483333333333
- type: precision_at_10
value: 6.339916666666667
- type: precision_at_100
value: 1.0483333333333333
- type: precision_at_1000
value: 0.14608333333333334
- type: precision_at_3
value: 14.726500000000003
- type: precision_at_5
value: 10.395
- type: recall_at_1
value: 23.223666666666666
- type: recall_at_10
value: 46.778999999999996
- type: recall_at_100
value: 69.27141666666667
- type: recall_at_1000
value: 87.27383333333334
- type: recall_at_3
value: 34.678749999999994
- type: recall_at_5
value: 39.79900000000001
- type: map_at_1
value: 21.677
- type: map_at_10
value: 27.828000000000003
- type: map_at_100
value: 28.538999999999998
- type: map_at_1000
value: 28.64
- type: map_at_3
value: 26.105
- type: map_at_5
value: 27.009
- type: mrr_at_1
value: 24.387
- type: mrr_at_10
value: 30.209999999999997
- type: mrr_at_100
value: 30.953000000000003
- type: mrr_at_1000
value: 31.029
- type: mrr_at_3
value: 28.707
- type: mrr_at_5
value: 29.610999999999997
- type: ndcg_at_1
value: 24.387
- type: ndcg_at_10
value: 31.378
- type: ndcg_at_100
value: 35.249
- type: ndcg_at_1000
value: 37.923
- type: ndcg_at_3
value: 28.213
- type: ndcg_at_5
value: 29.658
- type: precision_at_1
value: 24.387
- type: precision_at_10
value: 4.8309999999999995
- type: precision_at_100
value: 0.73
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.168
- type: precision_at_5
value: 8.251999999999999
- type: recall_at_1
value: 21.677
- type: recall_at_10
value: 40.069
- type: recall_at_100
value: 58.077
- type: recall_at_1000
value: 77.97
- type: recall_at_3
value: 31.03
- type: recall_at_5
value: 34.838
- type: map_at_1
value: 14.484
- type: map_at_10
value: 20.355
- type: map_at_100
value: 21.382
- type: map_at_1000
value: 21.511
- type: map_at_3
value: 18.448
- type: map_at_5
value: 19.451999999999998
- type: mrr_at_1
value: 17.584
- type: mrr_at_10
value: 23.825
- type: mrr_at_100
value: 24.704
- type: mrr_at_1000
value: 24.793000000000003
- type: mrr_at_3
value: 21.92
- type: mrr_at_5
value: 22.97
- type: ndcg_at_1
value: 17.584
- type: ndcg_at_10
value: 24.315
- type: ndcg_at_100
value: 29.354999999999997
- type: ndcg_at_1000
value: 32.641999999999996
- type: ndcg_at_3
value: 20.802
- type: ndcg_at_5
value: 22.335
- type: precision_at_1
value: 17.584
- type: precision_at_10
value: 4.443
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 9.807
- type: precision_at_5
value: 7.0889999999999995
- type: recall_at_1
value: 14.484
- type: recall_at_10
value: 32.804
- type: recall_at_100
value: 55.679
- type: recall_at_1000
value: 79.63
- type: recall_at_3
value: 22.976
- type: recall_at_5
value: 26.939
- type: map_at_1
value: 22.983999999999998
- type: map_at_10
value: 30.812
- type: map_at_100
value: 31.938
- type: map_at_1000
value: 32.056000000000004
- type: map_at_3
value: 28.449999999999996
- type: map_at_5
value: 29.542
- type: mrr_at_1
value: 27.145999999999997
- type: mrr_at_10
value: 34.782999999999994
- type: mrr_at_100
value: 35.699
- type: mrr_at_1000
value: 35.768
- type: mrr_at_3
value: 32.572
- type: mrr_at_5
value: 33.607
- type: ndcg_at_1
value: 27.145999999999997
- type: ndcg_at_10
value: 35.722
- type: ndcg_at_100
value: 40.964
- type: ndcg_at_1000
value: 43.598
- type: ndcg_at_3
value: 31.379
- type: ndcg_at_5
value: 32.924
- type: precision_at_1
value: 27.145999999999997
- type: precision_at_10
value: 6.063000000000001
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 14.366000000000001
- type: precision_at_5
value: 9.776
- type: recall_at_1
value: 22.983999999999998
- type: recall_at_10
value: 46.876
- type: recall_at_100
value: 69.646
- type: recall_at_1000
value: 88.305
- type: recall_at_3
value: 34.471000000000004
- type: recall_at_5
value: 38.76
- type: map_at_1
value: 23.017000000000003
- type: map_at_10
value: 31.049
- type: map_at_100
value: 32.582
- type: map_at_1000
value: 32.817
- type: map_at_3
value: 28.303
- type: map_at_5
value: 29.854000000000003
- type: mrr_at_1
value: 27.866000000000003
- type: mrr_at_10
value: 35.56
- type: mrr_at_100
value: 36.453
- type: mrr_at_1000
value: 36.519
- type: mrr_at_3
value: 32.938
- type: mrr_at_5
value: 34.391
- type: ndcg_at_1
value: 27.866000000000003
- type: ndcg_at_10
value: 36.506
- type: ndcg_at_100
value: 42.344
- type: ndcg_at_1000
value: 45.213
- type: ndcg_at_3
value: 31.805
- type: ndcg_at_5
value: 33.933
- type: precision_at_1
value: 27.866000000000003
- type: precision_at_10
value: 7.016
- type: precision_at_100
value: 1.468
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 14.822
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.017000000000003
- type: recall_at_10
value: 47.053
- type: recall_at_100
value: 73.177
- type: recall_at_1000
value: 91.47800000000001
- type: recall_at_3
value: 33.675
- type: recall_at_5
value: 39.36
- type: map_at_1
value: 16.673
- type: map_at_10
value: 24.051000000000002
- type: map_at_100
value: 24.933
- type: map_at_1000
value: 25.06
- type: map_at_3
value: 21.446
- type: map_at_5
value: 23.064
- type: mrr_at_1
value: 18.115000000000002
- type: mrr_at_10
value: 25.927
- type: mrr_at_100
value: 26.718999999999998
- type: mrr_at_1000
value: 26.817999999999998
- type: mrr_at_3
value: 23.383000000000003
- type: mrr_at_5
value: 25.008999999999997
- type: ndcg_at_1
value: 18.115000000000002
- type: ndcg_at_10
value: 28.669
- type: ndcg_at_100
value: 33.282000000000004
- type: ndcg_at_1000
value: 36.481
- type: ndcg_at_3
value: 23.574
- type: ndcg_at_5
value: 26.340000000000003
- type: precision_at_1
value: 18.115000000000002
- type: precision_at_10
value: 4.769
- type: precision_at_100
value: 0.767
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.8
- type: recall_at_1
value: 16.673
- type: recall_at_10
value: 41.063
- type: recall_at_100
value: 62.851
- type: recall_at_1000
value: 86.701
- type: recall_at_3
value: 27.532
- type: recall_at_5
value: 34.076
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.752
- type: map_at_10
value: 15.120000000000001
- type: map_at_100
value: 16.678
- type: map_at_1000
value: 16.854
- type: map_at_3
value: 12.603
- type: map_at_5
value: 13.918
- type: mrr_at_1
value: 19.283
- type: mrr_at_10
value: 29.145
- type: mrr_at_100
value: 30.281000000000002
- type: mrr_at_1000
value: 30.339
- type: mrr_at_3
value: 26.069
- type: mrr_at_5
value: 27.864
- type: ndcg_at_1
value: 19.283
- type: ndcg_at_10
value: 21.804000000000002
- type: ndcg_at_100
value: 28.576
- type: ndcg_at_1000
value: 32.063
- type: ndcg_at_3
value: 17.511
- type: ndcg_at_5
value: 19.112000000000002
- type: precision_at_1
value: 19.283
- type: precision_at_10
value: 6.873
- type: precision_at_100
value: 1.405
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 13.16
- type: precision_at_5
value: 10.189
- type: recall_at_1
value: 8.752
- type: recall_at_10
value: 27.004
- type: recall_at_100
value: 50.648
- type: recall_at_1000
value: 70.458
- type: recall_at_3
value: 16.461000000000002
- type: recall_at_5
value: 20.973
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.81
- type: map_at_10
value: 14.056
- type: map_at_100
value: 18.961
- type: map_at_1000
value: 20.169
- type: map_at_3
value: 10.496
- type: map_at_5
value: 11.952
- type: mrr_at_1
value: 53.5
- type: mrr_at_10
value: 63.479
- type: mrr_at_100
value: 63.971999999999994
- type: mrr_at_1000
value: 63.993
- type: mrr_at_3
value: 61.541999999999994
- type: mrr_at_5
value: 62.778999999999996
- type: ndcg_at_1
value: 42.25
- type: ndcg_at_10
value: 31.471
- type: ndcg_at_100
value: 35.115
- type: ndcg_at_1000
value: 42.408
- type: ndcg_at_3
value: 35.458
- type: ndcg_at_5
value: 32.973
- type: precision_at_1
value: 53.5
- type: precision_at_10
value: 24.85
- type: precision_at_100
value: 7.79
- type: precision_at_1000
value: 1.599
- type: precision_at_3
value: 38.667
- type: precision_at_5
value: 31.55
- type: recall_at_1
value: 6.81
- type: recall_at_10
value: 19.344
- type: recall_at_100
value: 40.837
- type: recall_at_1000
value: 64.661
- type: recall_at_3
value: 11.942
- type: recall_at_5
value: 14.646
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 44.64499999999999
- type: f1
value: 39.39106911352714
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 48.196
- type: map_at_10
value: 61.404
- type: map_at_100
value: 61.846000000000004
- type: map_at_1000
value: 61.866
- type: map_at_3
value: 58.975
- type: map_at_5
value: 60.525
- type: mrr_at_1
value: 52.025
- type: mrr_at_10
value: 65.43299999999999
- type: mrr_at_100
value: 65.80799999999999
- type: mrr_at_1000
value: 65.818
- type: mrr_at_3
value: 63.146
- type: mrr_at_5
value: 64.64
- type: ndcg_at_1
value: 52.025
- type: ndcg_at_10
value: 67.889
- type: ndcg_at_100
value: 69.864
- type: ndcg_at_1000
value: 70.337
- type: ndcg_at_3
value: 63.315
- type: ndcg_at_5
value: 65.91799999999999
- type: precision_at_1
value: 52.025
- type: precision_at_10
value: 9.182
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 25.968000000000004
- type: precision_at_5
value: 17.006
- type: recall_at_1
value: 48.196
- type: recall_at_10
value: 83.885
- type: recall_at_100
value: 92.671
- type: recall_at_1000
value: 96.018
- type: recall_at_3
value: 71.59
- type: recall_at_5
value: 77.946
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.193000000000001
- type: map_at_10
value: 25.168000000000003
- type: map_at_100
value: 27.017000000000003
- type: map_at_1000
value: 27.205000000000002
- type: map_at_3
value: 21.746
- type: map_at_5
value: 23.579
- type: mrr_at_1
value: 31.635999999999996
- type: mrr_at_10
value: 40.077
- type: mrr_at_100
value: 41.112
- type: mrr_at_1000
value: 41.160999999999994
- type: mrr_at_3
value: 37.937
- type: mrr_at_5
value: 39.18
- type: ndcg_at_1
value: 31.635999999999996
- type: ndcg_at_10
value: 32.298
- type: ndcg_at_100
value: 39.546
- type: ndcg_at_1000
value: 42.88
- type: ndcg_at_3
value: 29.221999999999998
- type: ndcg_at_5
value: 30.069000000000003
- type: precision_at_1
value: 31.635999999999996
- type: precision_at_10
value: 9.367
- type: precision_at_100
value: 1.645
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 20.01
- type: precision_at_5
value: 14.753
- type: recall_at_1
value: 15.193000000000001
- type: recall_at_10
value: 38.214999999999996
- type: recall_at_100
value: 65.95
- type: recall_at_1000
value: 85.85300000000001
- type: recall_at_3
value: 26.357000000000003
- type: recall_at_5
value: 31.319999999999997
- task:
type: Retrieval
dataset:
name: MTEB GerDaLIR
type: jinaai/ger_da_lir
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.363
- type: map_at_10
value: 16.222
- type: map_at_100
value: 17.28
- type: map_at_1000
value: 17.380000000000003
- type: map_at_3
value: 14.054
- type: map_at_5
value: 15.203
- type: mrr_at_1
value: 11.644
- type: mrr_at_10
value: 17.625
- type: mrr_at_100
value: 18.608
- type: mrr_at_1000
value: 18.695999999999998
- type: mrr_at_3
value: 15.481
- type: mrr_at_5
value: 16.659
- type: ndcg_at_1
value: 11.628
- type: ndcg_at_10
value: 20.028000000000002
- type: ndcg_at_100
value: 25.505
- type: ndcg_at_1000
value: 28.288000000000004
- type: ndcg_at_3
value: 15.603
- type: ndcg_at_5
value: 17.642
- type: precision_at_1
value: 11.628
- type: precision_at_10
value: 3.5589999999999997
- type: precision_at_100
value: 0.664
- type: precision_at_1000
value: 0.092
- type: precision_at_3
value: 7.109999999999999
- type: precision_at_5
value: 5.401
- type: recall_at_1
value: 10.363
- type: recall_at_10
value: 30.586000000000002
- type: recall_at_100
value: 56.43
- type: recall_at_1000
value: 78.142
- type: recall_at_3
value: 18.651
- type: recall_at_5
value: 23.493
- task:
type: Retrieval
dataset:
name: MTEB GermanDPR
type: deepset/germandpr
config: default
split: test
revision: 5129d02422a66be600ac89cd3e8531b4f97d347d
metrics:
- type: map_at_1
value: 60.78
- type: map_at_10
value: 73.91499999999999
- type: map_at_100
value: 74.089
- type: map_at_1000
value: 74.09400000000001
- type: map_at_3
value: 71.87
- type: map_at_5
value: 73.37700000000001
- type: mrr_at_1
value: 60.78
- type: mrr_at_10
value: 73.91499999999999
- type: mrr_at_100
value: 74.089
- type: mrr_at_1000
value: 74.09400000000001
- type: mrr_at_3
value: 71.87
- type: mrr_at_5
value: 73.37700000000001
- type: ndcg_at_1
value: 60.78
- type: ndcg_at_10
value: 79.35600000000001
- type: ndcg_at_100
value: 80.077
- type: ndcg_at_1000
value: 80.203
- type: ndcg_at_3
value: 75.393
- type: ndcg_at_5
value: 78.077
- type: precision_at_1
value: 60.78
- type: precision_at_10
value: 9.59
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 28.52
- type: precision_at_5
value: 18.4
- type: recall_at_1
value: 60.78
- type: recall_at_10
value: 95.902
- type: recall_at_100
value: 99.024
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.56099999999999
- type: recall_at_5
value: 92.0
- task:
type: STS
dataset:
name: MTEB GermanSTSBenchmark
type: jinaai/german-STSbenchmark
config: default
split: test
revision: 49d9b423b996fea62b483f9ee6dfb5ec233515ca
metrics:
- type: cos_sim_pearson
value: 88.49524420894356
- type: cos_sim_spearman
value: 88.32407839427714
- type: euclidean_pearson
value: 87.25098779877104
- type: euclidean_spearman
value: 88.22738098593608
- type: manhattan_pearson
value: 87.23872691839607
- type: manhattan_spearman
value: 88.2002968380165
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.81
- type: map_at_10
value: 46.238
- type: map_at_100
value: 47.141
- type: map_at_1000
value: 47.213
- type: map_at_3
value: 43.248999999999995
- type: map_at_5
value: 45.078
- type: mrr_at_1
value: 63.619
- type: mrr_at_10
value: 71.279
- type: mrr_at_100
value: 71.648
- type: mrr_at_1000
value: 71.665
- type: mrr_at_3
value: 69.76599999999999
- type: mrr_at_5
value: 70.743
- type: ndcg_at_1
value: 63.619
- type: ndcg_at_10
value: 55.38999999999999
- type: ndcg_at_100
value: 58.80800000000001
- type: ndcg_at_1000
value: 60.331999999999994
- type: ndcg_at_3
value: 50.727
- type: ndcg_at_5
value: 53.284
- type: precision_at_1
value: 63.619
- type: precision_at_10
value: 11.668000000000001
- type: precision_at_100
value: 1.434
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 32.001000000000005
- type: precision_at_5
value: 21.223
- type: recall_at_1
value: 31.81
- type: recall_at_10
value: 58.339
- type: recall_at_100
value: 71.708
- type: recall_at_1000
value: 81.85
- type: recall_at_3
value: 48.001
- type: recall_at_5
value: 53.059
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 68.60640000000001
- type: ap
value: 62.84296904042086
- type: f1
value: 68.50643633327537
- task:
type: Reranking
dataset:
name: MTEB MIRACL
type: jinaai/miracl
config: default
split: test
revision: 8741c3b61cd36ed9ca1b3d4203543a41793239e2
metrics:
- type: map
value: 64.29704335389768
- type: mrr
value: 72.11962197159565
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.3844049247606
- type: f1
value: 89.2124328528015
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.36855452240067
- type: f1
value: 87.35458822097442
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.48654810761514
- type: f1
value: 50.07229882504409
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 63.832065370526905
- type: f1
value: 46.283579383385806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.89038332212509
- type: f1
value: 61.86279849685129
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 67.44780095350535
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25084061869536
- type: f1
value: 71.43965023016408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.73907195696032
- type: f1
value: 73.69920814839061
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.32577306498249
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.759349326367783
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.401342674703425
- type: mrr
value: 31.384379585660987
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.855
- type: map_at_10
value: 10.01
- type: map_at_100
value: 12.461
- type: map_at_1000
value: 13.776
- type: map_at_3
value: 7.252
- type: map_at_5
value: 8.679
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 49.323
- type: mrr_at_100
value: 49.954
- type: mrr_at_1000
value: 49.997
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.375
- type: ndcg_at_1
value: 39.318999999999996
- type: ndcg_at_10
value: 28.607
- type: ndcg_at_100
value: 26.554
- type: ndcg_at_1000
value: 35.731
- type: ndcg_at_3
value: 32.897999999999996
- type: ndcg_at_5
value: 31.53
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 20.867
- type: precision_at_100
value: 6.796
- type: precision_at_1000
value: 1.983
- type: precision_at_3
value: 30.547
- type: precision_at_5
value: 27.245
- type: recall_at_1
value: 4.855
- type: recall_at_10
value: 14.08
- type: recall_at_100
value: 28.188000000000002
- type: recall_at_1000
value: 60.07900000000001
- type: recall_at_3
value: 7.947
- type: recall_at_5
value: 10.786
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 26.906999999999996
- type: map_at_10
value: 41.147
- type: map_at_100
value: 42.269
- type: map_at_1000
value: 42.308
- type: map_at_3
value: 36.638999999999996
- type: map_at_5
value: 39.285
- type: mrr_at_1
value: 30.359
- type: mrr_at_10
value: 43.607
- type: mrr_at_100
value: 44.454
- type: mrr_at_1000
value: 44.481
- type: mrr_at_3
value: 39.644
- type: mrr_at_5
value: 42.061
- type: ndcg_at_1
value: 30.330000000000002
- type: ndcg_at_10
value: 48.899
- type: ndcg_at_100
value: 53.612
- type: ndcg_at_1000
value: 54.51200000000001
- type: ndcg_at_3
value: 40.262
- type: ndcg_at_5
value: 44.787
- type: precision_at_1
value: 30.330000000000002
- type: precision_at_10
value: 8.323
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 18.395
- type: precision_at_5
value: 13.627
- type: recall_at_1
value: 26.906999999999996
- type: recall_at_10
value: 70.215
- type: recall_at_100
value: 90.61200000000001
- type: recall_at_1000
value: 97.294
- type: recall_at_3
value: 47.784
- type: recall_at_5
value: 58.251
- task:
type: PairClassification
dataset:
name: MTEB PawsX
type: paws-x
config: default
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 60.5
- type: cos_sim_ap
value: 57.606096528877494
- type: cos_sim_f1
value: 62.24240307369892
- type: cos_sim_precision
value: 45.27439024390244
- type: cos_sim_recall
value: 99.55307262569832
- type: dot_accuracy
value: 57.699999999999996
- type: dot_ap
value: 51.289351057160616
- type: dot_f1
value: 62.25953130465197
- type: dot_precision
value: 45.31568228105906
- type: dot_recall
value: 99.4413407821229
- type: euclidean_accuracy
value: 60.45
- type: euclidean_ap
value: 57.616461421424034
- type: euclidean_f1
value: 62.313697657913416
- type: euclidean_precision
value: 45.657826313052524
- type: euclidean_recall
value: 98.10055865921787
- type: manhattan_accuracy
value: 60.3
- type: manhattan_ap
value: 57.580565271667325
- type: manhattan_f1
value: 62.24240307369892
- type: manhattan_precision
value: 45.27439024390244
- type: manhattan_recall
value: 99.55307262569832
- type: max_accuracy
value: 60.5
- type: max_ap
value: 57.616461421424034
- type: max_f1
value: 62.313697657913416
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.21300000000001
- type: map_at_10
value: 84.136
- type: map_at_100
value: 84.796
- type: map_at_1000
value: 84.812
- type: map_at_3
value: 81.182
- type: map_at_5
value: 83.027
- type: mrr_at_1
value: 80.91000000000001
- type: mrr_at_10
value: 87.155
- type: mrr_at_100
value: 87.27000000000001
- type: mrr_at_1000
value: 87.271
- type: mrr_at_3
value: 86.158
- type: mrr_at_5
value: 86.828
- type: ndcg_at_1
value: 80.88
- type: ndcg_at_10
value: 87.926
- type: ndcg_at_100
value: 89.223
- type: ndcg_at_1000
value: 89.321
- type: ndcg_at_3
value: 85.036
- type: ndcg_at_5
value: 86.614
- type: precision_at_1
value: 80.88
- type: precision_at_10
value: 13.350000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.173
- type: precision_at_5
value: 24.476
- type: recall_at_1
value: 70.21300000000001
- type: recall_at_10
value: 95.12
- type: recall_at_100
value: 99.535
- type: recall_at_1000
value: 99.977
- type: recall_at_3
value: 86.833
- type: recall_at_5
value: 91.26100000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 47.754688783184875
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 54.875736374329364
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.773
- type: map_at_10
value: 9.447
- type: map_at_100
value: 11.1
- type: map_at_1000
value: 11.37
- type: map_at_3
value: 6.787
- type: map_at_5
value: 8.077
- type: mrr_at_1
value: 18.5
- type: mrr_at_10
value: 28.227000000000004
- type: mrr_at_100
value: 29.445
- type: mrr_at_1000
value: 29.515
- type: mrr_at_3
value: 25.2
- type: mrr_at_5
value: 27.055
- type: ndcg_at_1
value: 18.5
- type: ndcg_at_10
value: 16.29
- type: ndcg_at_100
value: 23.250999999999998
- type: ndcg_at_1000
value: 28.445999999999998
- type: ndcg_at_3
value: 15.376000000000001
- type: ndcg_at_5
value: 13.528
- type: precision_at_1
value: 18.5
- type: precision_at_10
value: 8.51
- type: precision_at_100
value: 1.855
- type: precision_at_1000
value: 0.311
- type: precision_at_3
value: 14.533
- type: precision_at_5
value: 12.0
- type: recall_at_1
value: 3.773
- type: recall_at_10
value: 17.282
- type: recall_at_100
value: 37.645
- type: recall_at_1000
value: 63.138000000000005
- type: recall_at_3
value: 8.853
- type: recall_at_5
value: 12.168
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.32789517976525
- type: cos_sim_spearman
value: 80.32750384145629
- type: euclidean_pearson
value: 81.5025131452508
- type: euclidean_spearman
value: 80.24797115147175
- type: manhattan_pearson
value: 81.51634463412002
- type: manhattan_spearman
value: 80.24614721495055
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.47050448992432
- type: cos_sim_spearman
value: 80.58919997743621
- type: euclidean_pearson
value: 85.83258918113664
- type: euclidean_spearman
value: 80.97441389240902
- type: manhattan_pearson
value: 85.7798262013878
- type: manhattan_spearman
value: 80.97208703064196
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.95341439711532
- type: cos_sim_spearman
value: 86.59127484634989
- type: euclidean_pearson
value: 85.57850603454227
- type: euclidean_spearman
value: 86.47130477363419
- type: manhattan_pearson
value: 85.59387925447652
- type: manhattan_spearman
value: 86.50665427391583
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.39810909161844
- type: cos_sim_spearman
value: 82.98595295546008
- type: euclidean_pearson
value: 84.04681129969951
- type: euclidean_spearman
value: 82.98197460689866
- type: manhattan_pearson
value: 83.9918798171185
- type: manhattan_spearman
value: 82.91148131768082
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.02072712147692
- type: cos_sim_spearman
value: 88.78821332623012
- type: euclidean_pearson
value: 88.12132045572747
- type: euclidean_spearman
value: 88.74273451067364
- type: manhattan_pearson
value: 88.05431550059166
- type: manhattan_spearman
value: 88.67610233020723
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.96134704624787
- type: cos_sim_spearman
value: 84.44062976314666
- type: euclidean_pearson
value: 84.03642536310323
- type: euclidean_spearman
value: 84.4535014579785
- type: manhattan_pearson
value: 83.92874228901483
- type: manhattan_spearman
value: 84.33634314951631
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.3154168064887
- type: cos_sim_spearman
value: 86.72393652571682
- type: euclidean_pearson
value: 86.04193246174164
- type: euclidean_spearman
value: 86.30482896608093
- type: manhattan_pearson
value: 85.95524084651859
- type: manhattan_spearman
value: 86.06031431994282
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.91079682750804
- type: cos_sim_spearman
value: 89.30961836617064
- type: euclidean_pearson
value: 88.86249564158628
- type: euclidean_spearman
value: 89.04772899592396
- type: manhattan_pearson
value: 88.85579791315043
- type: manhattan_spearman
value: 88.94190462541333
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.00558145551088
- type: cos_sim_spearman
value: 67.96601170393878
- type: euclidean_pearson
value: 67.87627043214336
- type: euclidean_spearman
value: 66.76402572303859
- type: manhattan_pearson
value: 67.88306560555452
- type: manhattan_spearman
value: 66.6273862035506
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 50.83759332748726
- type: cos_sim_spearman
value: 59.066344562858006
- type: euclidean_pearson
value: 50.08955848154131
- type: euclidean_spearman
value: 58.36517305855221
- type: manhattan_pearson
value: 50.05257267223111
- type: manhattan_spearman
value: 58.37570252804986
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.22749007956492
- type: cos_sim_spearman
value: 55.97282077657827
- type: euclidean_pearson
value: 62.10661533695752
- type: euclidean_spearman
value: 53.62780854854067
- type: manhattan_pearson
value: 62.37138085709719
- type: manhattan_spearman
value: 54.17556356828155
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.91145397065878
- type: cos_sim_spearman
value: 88.13960018389005
- type: euclidean_pearson
value: 87.67618876224006
- type: euclidean_spearman
value: 87.99119480810556
- type: manhattan_pearson
value: 87.67920297334753
- type: manhattan_spearman
value: 87.99113250064492
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.09133563707582
- type: mrr
value: 93.2415288052543
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 47.760999999999996
- type: map_at_10
value: 56.424
- type: map_at_100
value: 57.24399999999999
- type: map_at_1000
value: 57.278
- type: map_at_3
value: 53.68000000000001
- type: map_at_5
value: 55.442
- type: mrr_at_1
value: 50.666999999999994
- type: mrr_at_10
value: 58.012
- type: mrr_at_100
value: 58.736
- type: mrr_at_1000
value: 58.769000000000005
- type: mrr_at_3
value: 56.056
- type: mrr_at_5
value: 57.321999999999996
- type: ndcg_at_1
value: 50.666999999999994
- type: ndcg_at_10
value: 60.67700000000001
- type: ndcg_at_100
value: 64.513
- type: ndcg_at_1000
value: 65.62400000000001
- type: ndcg_at_3
value: 56.186
- type: ndcg_at_5
value: 58.692
- type: precision_at_1
value: 50.666999999999994
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 1.023
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 21.889
- type: precision_at_5
value: 14.866999999999999
- type: recall_at_1
value: 47.760999999999996
- type: recall_at_10
value: 72.006
- type: recall_at_100
value: 89.767
- type: recall_at_1000
value: 98.833
- type: recall_at_3
value: 60.211000000000006
- type: recall_at_5
value: 66.3
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 94.86690691995835
- type: cos_sim_f1
value: 89.37875751503007
- type: cos_sim_precision
value: 89.5582329317269
- type: cos_sim_recall
value: 89.2
- type: dot_accuracy
value: 99.76336633663367
- type: dot_ap
value: 94.26453740761586
- type: dot_f1
value: 88.00783162016641
- type: dot_precision
value: 86.19367209971237
- type: dot_recall
value: 89.9
- type: euclidean_accuracy
value: 99.7940594059406
- type: euclidean_ap
value: 94.85459757524379
- type: euclidean_f1
value: 89.62779156327544
- type: euclidean_precision
value: 88.96551724137932
- type: euclidean_recall
value: 90.3
- type: manhattan_accuracy
value: 99.79009900990098
- type: manhattan_ap
value: 94.76971336654465
- type: manhattan_f1
value: 89.35323383084577
- type: manhattan_precision
value: 88.91089108910892
- type: manhattan_recall
value: 89.8
- type: max_accuracy
value: 99.7940594059406
- type: max_ap
value: 94.86690691995835
- type: max_f1
value: 89.62779156327544
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.38197670064987
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.08330158937971
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.50367079063226
- type: mrr
value: 50.30444943128768
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.37739520909561
- type: cos_sim_spearman
value: 31.548500943973913
- type: dot_pearson
value: 29.983610104303
- type: dot_spearman
value: 29.90185869098618
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.5810000000000002
- type: map_at_100
value: 9.064
- type: map_at_1000
value: 22.161
- type: map_at_3
value: 0.536
- type: map_at_5
value: 0.8370000000000001
- type: mrr_at_1
value: 80.0
- type: mrr_at_10
value: 86.75
- type: mrr_at_100
value: 86.799
- type: mrr_at_1000
value: 86.799
- type: mrr_at_3
value: 85.0
- type: mrr_at_5
value: 86.5
- type: ndcg_at_1
value: 73.0
- type: ndcg_at_10
value: 65.122
- type: ndcg_at_100
value: 51.853
- type: ndcg_at_1000
value: 47.275
- type: ndcg_at_3
value: 66.274
- type: ndcg_at_5
value: 64.826
- type: precision_at_1
value: 80.0
- type: precision_at_10
value: 70.19999999999999
- type: precision_at_100
value: 53.480000000000004
- type: precision_at_1000
value: 20.946
- type: precision_at_3
value: 71.333
- type: precision_at_5
value: 70.0
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.884
- type: recall_at_100
value: 12.57
- type: recall_at_1000
value: 44.208999999999996
- type: recall_at_3
value: 0.5890000000000001
- type: recall_at_5
value: 0.95
- task:
type: Clustering
dataset:
name: MTEB TenKGnadClusteringP2P
type: slvnwhrl/tenkgnad-clustering-p2p
config: default
split: test
revision: 5c59e41555244b7e45c9a6be2d720ab4bafae558
metrics:
- type: v_measure
value: 42.84199261133083
- task:
type: Clustering
dataset:
name: MTEB TenKGnadClusteringS2S
type: slvnwhrl/tenkgnad-clustering-s2s
config: default
split: test
revision: 6cddbe003f12b9b140aec477b583ac4191f01786
metrics:
- type: v_measure
value: 23.689557114798838
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.941
- type: map_at_10
value: 8.222
- type: map_at_100
value: 14.277999999999999
- type: map_at_1000
value: 15.790000000000001
- type: map_at_3
value: 4.4670000000000005
- type: map_at_5
value: 5.762
- type: mrr_at_1
value: 24.490000000000002
- type: mrr_at_10
value: 38.784
- type: mrr_at_100
value: 39.724
- type: mrr_at_1000
value: 39.724
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 37.415
- type: ndcg_at_1
value: 22.448999999999998
- type: ndcg_at_10
value: 21.026
- type: ndcg_at_100
value: 33.721000000000004
- type: ndcg_at_1000
value: 45.045
- type: ndcg_at_3
value: 20.053
- type: ndcg_at_5
value: 20.09
- type: precision_at_1
value: 24.490000000000002
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.469
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 21.769
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 1.941
- type: recall_at_10
value: 14.915999999999999
- type: recall_at_100
value: 46.155
- type: recall_at_1000
value: 80.664
- type: recall_at_3
value: 5.629
- type: recall_at_5
value: 8.437
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.64800000000001
- type: ap
value: 12.914826731261094
- type: f1
value: 53.05213503422915
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.427277872099594
- type: f1
value: 60.78292007556828
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 40.48134168406559
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.79465935506944
- type: cos_sim_ap
value: 70.24589055290592
- type: cos_sim_f1
value: 65.0994575045208
- type: cos_sim_precision
value: 63.76518218623482
- type: cos_sim_recall
value: 66.49076517150397
- type: dot_accuracy
value: 84.63968528342374
- type: dot_ap
value: 69.84683095084355
- type: dot_f1
value: 64.50606169727523
- type: dot_precision
value: 59.1719885487778
- type: dot_recall
value: 70.89709762532982
- type: euclidean_accuracy
value: 84.76485664898374
- type: euclidean_ap
value: 70.20556438685551
- type: euclidean_f1
value: 65.06796614516543
- type: euclidean_precision
value: 63.29840319361277
- type: euclidean_recall
value: 66.93931398416886
- type: manhattan_accuracy
value: 84.72313286046374
- type: manhattan_ap
value: 70.17151475534308
- type: manhattan_f1
value: 65.31379180759113
- type: manhattan_precision
value: 62.17505366086334
- type: manhattan_recall
value: 68.7862796833773
- type: max_accuracy
value: 84.79465935506944
- type: max_ap
value: 70.24589055290592
- type: max_f1
value: 65.31379180759113
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.95874568246207
- type: cos_sim_ap
value: 85.82517548264127
- type: cos_sim_f1
value: 78.22288041466125
- type: cos_sim_precision
value: 75.33875338753387
- type: cos_sim_recall
value: 81.33661841700031
- type: dot_accuracy
value: 88.836496293709
- type: dot_ap
value: 85.53430720252186
- type: dot_f1
value: 78.10616085869725
- type: dot_precision
value: 74.73269555430501
- type: dot_recall
value: 81.79858330766862
- type: euclidean_accuracy
value: 88.92769821865176
- type: euclidean_ap
value: 85.65904346964223
- type: euclidean_f1
value: 77.98774074208407
- type: euclidean_precision
value: 73.72282795035315
- type: euclidean_recall
value: 82.77640899291654
- type: manhattan_accuracy
value: 88.86366282454303
- type: manhattan_ap
value: 85.61599642231819
- type: manhattan_f1
value: 78.01480509061737
- type: manhattan_precision
value: 74.10460685833044
- type: manhattan_recall
value: 82.36064059131506
- type: max_accuracy
value: 88.95874568246207
- type: max_ap
value: 85.82517548264127
- type: max_f1
value: 78.22288041466125
- task:
type: Retrieval
dataset:
name: MTEB WikiCLIR
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.9539999999999997
- type: map_at_10
value: 7.407
- type: map_at_100
value: 8.677999999999999
- type: map_at_1000
value: 9.077
- type: map_at_3
value: 5.987
- type: map_at_5
value: 6.6979999999999995
- type: mrr_at_1
value: 35.65
- type: mrr_at_10
value: 45.097
- type: mrr_at_100
value: 45.83
- type: mrr_at_1000
value: 45.871
- type: mrr_at_3
value: 42.63
- type: mrr_at_5
value: 44.104
- type: ndcg_at_1
value: 29.215000000000003
- type: ndcg_at_10
value: 22.694
- type: ndcg_at_100
value: 22.242
- type: ndcg_at_1000
value: 27.069
- type: ndcg_at_3
value: 27.641
- type: ndcg_at_5
value: 25.503999999999998
- type: precision_at_1
value: 35.65
- type: precision_at_10
value: 12.795000000000002
- type: precision_at_100
value: 3.354
- type: precision_at_1000
value: 0.743
- type: precision_at_3
value: 23.403
- type: precision_at_5
value: 18.474
- type: recall_at_1
value: 3.9539999999999997
- type: recall_at_10
value: 11.301
- type: recall_at_100
value: 22.919999999999998
- type: recall_at_1000
value: 40.146
- type: recall_at_3
value: 7.146
- type: recall_at_5
value: 8.844000000000001
- task:
type: Retrieval
dataset:
name: MTEB XMarket
type: jinaai/xmarket_de
config: default
split: test
revision: 2336818db4c06570fcdf263e1bcb9993b786f67a
metrics:
- type: map_at_1
value: 4.872
- type: map_at_10
value: 10.658
- type: map_at_100
value: 13.422999999999998
- type: map_at_1000
value: 14.245
- type: map_at_3
value: 7.857
- type: map_at_5
value: 9.142999999999999
- type: mrr_at_1
value: 16.744999999999997
- type: mrr_at_10
value: 24.416
- type: mrr_at_100
value: 25.432
- type: mrr_at_1000
value: 25.502999999999997
- type: mrr_at_3
value: 22.096
- type: mrr_at_5
value: 23.421
- type: ndcg_at_1
value: 16.695999999999998
- type: ndcg_at_10
value: 18.66
- type: ndcg_at_100
value: 24.314
- type: ndcg_at_1000
value: 29.846
- type: ndcg_at_3
value: 17.041999999999998
- type: ndcg_at_5
value: 17.585
- type: precision_at_1
value: 16.695999999999998
- type: precision_at_10
value: 10.374
- type: precision_at_100
value: 3.988
- type: precision_at_1000
value: 1.1860000000000002
- type: precision_at_3
value: 14.21
- type: precision_at_5
value: 12.623000000000001
- type: recall_at_1
value: 4.872
- type: recall_at_10
value: 18.624
- type: recall_at_100
value: 40.988
- type: recall_at_1000
value: 65.33
- type: recall_at_3
value: 10.162
- type: recall_at_5
value: 13.517999999999999
---
<br><br>
<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>
<p align="center">
<b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>.</b>
</p>
## Quick Start
The easiest way to start using `jina-embeddings-v2-base-de` is to use Jina AI's [Embedding API](https://jina.ai/embeddings/).
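If you just want to try the model without hosting it yourself, a request along the lines of the sketch below can be sent to the Embedding API. Note that the endpoint path, payload shape, and the `JINA_API_KEY` environment variable are illustrative assumptions rather than an authoritative specification, so check the Embedding API documentation for the exact format.
```python
import os
import requests

# Hedged sketch: endpoint and payload shape are assumed for illustration only.
API_URL = "https://api.jina.ai/v1/embeddings"  # assumed endpoint
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['JINA_API_KEY']}",  # your API key (assumed env var)
}
payload = {
    "model": "jina-embeddings-v2-base-de",
    "input": ["How is the weather today?", "Wie ist das Wetter heute?"],
}

response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()

# The response is assumed to follow the common `data[i].embedding` layout.
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), len(embeddings[0]))
```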
## Intended Usage & Model Info
`jina-embeddings-v2-base-de` is a German/English bilingual text **embedding model** supporting **8192 sequence length**.
It is based on a BERT architecture (JinaBERT) that supports the symmetric bidirectional variant of [ALiBi](https://arxiv.org/abs/2108.12409) to allow longer sequence length.
We have designed it for high performance in mono-lingual & cross-lingual applications and trained it specifically to support mixed German-English input without bias.
Additionally, we provide the following embedding models:
- [`jina-embeddings-v2-small-en`](https://huggingface.co/jinaai/jina-embeddings-v2-small-en): 33 million parameters.
- [`jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en): 137 million parameters.
- [`jina-embeddings-v2-base-zh`](https://huggingface.co/jinaai/jina-embeddings-v2-base-zh): 161 million parameters, Chinese-English bilingual embeddings.
- [`jina-embeddings-v2-base-de`](https://huggingface.co/jinaai/jina-embeddings-v2-base-de): 161 million parameters, German-English bilingual embeddings **(you are here)**.
- [`jina-embeddings-v2-base-es`](): Spanish-English bilingual embeddings (coming soon).
- [`jina-embeddings-v2-base-code`](https://huggingface.co/jinaai/jina-embeddings-v2-base-code): 161 million parameters, code embeddings.
## Data & Parameters
The data and training details are described in this [technical report](https://arxiv.org/abs/2402.17016).
## Usage
**<details><summary>Please apply mean pooling when integrating the model.</summary>**
<p>
### Why mean pooling?
Mean pooling takes all token embeddings from the model output and averages them at the sentence/paragraph level.
It has proven to be one of the most effective ways to produce high-quality sentence embeddings.
We offer an `encode` function to deal with this.
However, if you would like to do it without using the default `encode` function:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding positions via the attention mask.
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['How is the weather today?', 'What is the current weather like today?']

tokenizer = AutoTokenizer.from_pretrained('jinaai/jina-embeddings-v2-base-de')
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

# Pool token embeddings into sentence embeddings and L2-normalize them.
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
embeddings = F.normalize(embeddings, p=2, dim=1)
```
</p>
</details>
You can also use Jina Embedding models directly with the `transformers` package.
```python
!pip install transformers
import torch
from transformers import AutoModel
from numpy.linalg import norm

# Cosine similarity between two embedding vectors.
cos_sim = lambda a, b: (a @ b.T) / (norm(a) * norm(b))

model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-de', trust_remote_code=True, torch_dtype=torch.bfloat16)
embeddings = model.encode(['How is the weather today?', 'Wie ist das Wetter heute?'])
print(cos_sim(embeddings[0], embeddings[1]))
```
If you only need to handle shorter sequences, such as 2k tokens, pass the `max_length` parameter to the `encode` function:
```python
embeddings = model.encode(
['Very long ... document'],
max_length=2048
)
```
As of its latest release (v2.3.0), sentence-transformers also supports Jina embeddings (please make sure that you are logged in to Hugging Face as well):
```python
!pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
"jinaai/jina-embeddings-v2-base-de", # switch to en/zh for English or Chinese
trust_remote_code=True
)
# control your input sequence length up to 8192
model.max_seq_length = 1024
embeddings = model.encode([
'How is the weather today?',
'Wie ist das Wetter heute?'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
## Alternatives to Using Transformers Package
1. _Managed SaaS_: Get started with a free key on Jina AI's [Embedding API](https://jina.ai/embeddings/).
2. _Private and high-performance deployment_: Get started by picking from our suite of models and deploy them on [AWS Sagemaker](https://aws.amazon.com/marketplace/seller-profile?id=seller-stch2ludm6vgy).
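For the managed API, a request is a plain HTTP call. The snippet below is only a minimal sketch: the endpoint URL, model identifier, and response schema shown here are assumptions, so please consult the [Embedding API](https://jina.ai/embeddings/) documentation for the authoritative details.
```python
import requests

# Sketch of a request to the hosted Embedding API.
# The endpoint URL, model name, and response schema below are assumptions;
# refer to https://jina.ai/embeddings/ for the authoritative API reference.
response = requests.post(
    "https://api.jina.ai/v1/embeddings",              # assumed endpoint
    headers={"Authorization": "Bearer <YOUR_API_KEY>"},
    json={
        "model": "jina-embeddings-v2-base-de",        # assumed model identifier
        "input": ["How is the weather today?", "Wie ist das Wetter heute?"],
    },
)
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), len(embeddings[0]))
```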
## Benchmark Results
We evaluated our bilingual model on all German and English evaluation tasks available in the [MTEB benchmark](https://huggingface.co/blog/mteb). In addition, we compared it against several other German, English, and multilingual models on additional German evaluation tasks:
<img src="de_evaluation_results.png" width="780px">
## Use Jina Embeddings for RAG
According to the latest blog post from [LlamaIndex](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83),
> In summary, to achieve the peak performance in both hit rate and MRR, the combination of OpenAI or JinaAI-Base embeddings with the CohereRerank/bge-reranker-large reranker stands out.
<img src="https://miro.medium.com/v2/resize:fit:4800/format:webp/1*ZP2RVejCZovF3FDCg-Bx3A.png" width="780px">
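A minimal retrieve-then-rerank sketch along these lines is shown below. It assumes the `sentence-transformers` `CrossEncoder` class and the `BAAI/bge-reranker-large` checkpoint name for the reranker mentioned above; substitute whichever reranker you actually deploy.
```python
from sentence_transformers import SentenceTransformer, CrossEncoder
from sentence_transformers.util import cos_sim

# Stage 1: dense retrieval with the bilingual embedding model
embedder = SentenceTransformer("jinaai/jina-embeddings-v2-base-de", trust_remote_code=True)
query = "Wie ist das Wetter heute?"
passages = [
    "The weather today is sunny with a light breeze.",
    "Das Rezept benötigt 200 g Mehl und zwei Eier.",
    "Heute ist es sonnig bei leichtem Wind.",
]
scores = cos_sim(embedder.encode(query), embedder.encode(passages))[0]
top_k = scores.argsort(descending=True)[:2]

# Stage 2: rerank the retrieved passages with a cross-encoder
# ("BAAI/bge-reranker-large" is an assumed checkpoint name; any cross-encoder
#  reranker can be substituted)
reranker = CrossEncoder("BAAI/bge-reranker-large")
candidates = [passages[int(i)] for i in top_k]
rerank_scores = reranker.predict([(query, passage) for passage in candidates])
print(candidates[int(rerank_scores.argmax())])
```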
## Contact
Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas.
## Citation
If you find Jina Embeddings useful in your research, please cite the following paper:
```
@article{mohr2024multi,
title={Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings},
author={Mohr, Isabelle and Krimmel, Markus and Sturua, Saba and Akram, Mohammad Kalim and Koukounas, Andreas and G{\"u}nther, Michael and Mastrapas, Georgios and Ravishankar, Vinit and Mart{\'\i}nez, Joan Fontanals and Wang, Feng and others},
journal={arXiv preprint arXiv:2402.17016},
year={2024}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF | Hoshino-Yumetsuki | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,741 | 1,741 | 41 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
# Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-1.5B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
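Since `gte-Qwen2-1.5B-instruct` is a text-embedding model rather than a chat model, the completion-style prompt above is mainly a smoke test. Below is a minimal sketch for extracting embeddings instead; it assumes your llama.cpp build ships the `llama-embedding` tool and that it accepts the same `--hf-repo`/`--hf-file` download flags as the other binaries, so check your local build before relying on it.
```bash
# Hypothetical invocation: downloads the GGUF (if needed) and prints the embedding vector for the prompt.
llama-embedding --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "What is the capital of China?"
```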
### Server:
```bash
llama-server --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -c 2048
```
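Once the server is up, you can query it over HTTP. A minimal sketch, assuming the default listen address `http://localhost:8080` and the long-standing `/completion` endpoint:
```bash
# Send a prompt to the running llama-server and read the generated text back as JSON.
curl -s http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 32}'
```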
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
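Recent llama.cpp revisions have been moving from the Makefile to CMake; if `make` is no longer available in your checkout, an equivalent build might look like the sketch below. Treat the exact option names (`LLAMA_CURL`, `GGML_CUDA`) as assumptions and check the repo's current build docs.
```bash
# Hypothetical CMake equivalent of the make invocation above.
cmake -B build -DLLAMA_CURL=ON          # add e.g. -DGGML_CUDA=ON for NVIDIA GPUs
cmake --build build --config Release
```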
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
NovaSearch/jasper_en_vision_language_v1 | NovaSearch | null | [
"sentence-transformers",
"safetensors",
"jasper_vl",
"mteb",
"custom_code",
"en",
"dataset:BAAI/Infinity-MM",
"dataset:HuggingFaceFW/fineweb-edu",
"arxiv:2412.19048",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:finetune:NovaSearch/stella_en_1.5B_v5",
"model-index",
"region:us"
] | 1,733 | 1,737 | 10,537 | 46 | ---
base_model:
- dunzhang/stella_en_1.5B_v5
- google/siglip-so400m-patch14-384
datasets:
- BAAI/Infinity-MM
- HuggingFaceFW/fineweb-edu
language:
- en
tags:
- mteb
- sentence-transformers
model-index:
- name: jasper_en_vision_language_v1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 95.7271
- type: f1
value: 89.25450000000001
- type: f1_weighted
value: 95.8563
- type: ap
value: 67.1563
- type: ap_weighted
value: 67.1563
- type: main_score
value: 95.7271
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 93.7761
- type: f1
value: 90.7582
- type: f1_weighted
value: 93.974
- type: ap
value: 74.88759999999999
- type: ap_weighted
value: 74.88759999999999
- type: main_score
value: 93.7761
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.5809
- type: f1
value: 97.5808
- type: f1_weighted
value: 97.5808
- type: ap
value: 96.3911
- type: ap_weighted
value: 96.3911
- type: main_score
value: 97.5809
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.918
- type: f1
value: 60.696099999999994
- type: f1_weighted
value: 60.696099999999994
- type: main_score
value: 62.918
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: ndcg_at_1
value: 41.323
- type: ndcg_at_3
value: 56.737
- type: ndcg_at_5
value: 61.254
- type: ndcg_at_10
value: 65.204
- type: ndcg_at_20
value: 66.75
- type: ndcg_at_100
value: 67.254
- type: ndcg_at_1000
value: 67.254
- type: map_at_1
value: 41.323
- type: map_at_3
value: 52.881
- type: map_at_5
value: 55.373000000000005
- type: map_at_10
value: 57.013999999999996
- type: map_at_20
value: 57.457
- type: map_at_100
value: 57.538
- type: map_at_1000
value: 57.538
- type: recall_at_1
value: 41.323
- type: recall_at_3
value: 67.923
- type: recall_at_5
value: 78.947
- type: recall_at_10
value: 91.11
- type: recall_at_20
value: 97.084
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: precision_at_1
value: 41.323
- type: precision_at_3
value: 22.641
- type: precision_at_5
value: 15.789
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_20
value: 4.854
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 41.6785
- type: mrr_at_3
value: 52.963499999999996
- type: mrr_at_5
value: 55.456399999999995
- type: mrr_at_10
value: 57.104299999999995
- type: mrr_at_20
value: 57.5402
- type: mrr_at_100
value: 57.6213
- type: mrr_at_1000
value: 57.6213
- type: nauc_ndcg_at_1_max
value: -12.633600000000001
- type: nauc_ndcg_at_1_std
value: -19.747700000000002
- type: nauc_ndcg_at_1_diff1
value: 25.820300000000003
- type: nauc_ndcg_at_3_max
value: -8.480799999999999
- type: nauc_ndcg_at_3_std
value: -18.928900000000002
- type: nauc_ndcg_at_3_diff1
value: 19.409499999999998
- type: nauc_ndcg_at_5_max
value: -7.9172
- type: nauc_ndcg_at_5_std
value: -19.2306
- type: nauc_ndcg_at_5_diff1
value: 18.5809
- type: nauc_ndcg_at_10_max
value: -9.7812
- type: nauc_ndcg_at_10_std
value: -19.9964
- type: nauc_ndcg_at_10_diff1
value: 18.0753
- type: nauc_ndcg_at_20_max
value: -8.6546
- type: nauc_ndcg_at_20_std
value: -19.3205
- type: nauc_ndcg_at_20_diff1
value: 20.1741
- type: nauc_ndcg_at_100_max
value: -9.376900000000001
- type: nauc_ndcg_at_100_std
value: -19.5093
- type: nauc_ndcg_at_100_diff1
value: 20.4997
- type: nauc_ndcg_at_1000_max
value: -9.376900000000001
- type: nauc_ndcg_at_1000_std
value: -19.5093
- type: nauc_ndcg_at_1000_diff1
value: 20.4997
- type: nauc_map_at_1_max
value: -12.633600000000001
- type: nauc_map_at_1_std
value: -19.747700000000002
- type: nauc_map_at_1_diff1
value: 25.820300000000003
- type: nauc_map_at_3_max
value: -9.5533
- type: nauc_map_at_3_std
value: -19.3235
- type: nauc_map_at_3_diff1
value: 20.9083
- type: nauc_map_at_5_max
value: -9.3126
- type: nauc_map_at_5_std
value: -19.4593
- type: nauc_map_at_5_diff1
value: 20.5718
- type: nauc_map_at_10_max
value: -10.0766
- type: nauc_map_at_10_std
value: -19.8322
- type: nauc_map_at_10_diff1
value: 20.484099999999998
- type: nauc_map_at_20_max
value: -9.7917
- type: nauc_map_at_20_std
value: -19.6759
- type: nauc_map_at_20_diff1
value: 21.0095
- type: nauc_map_at_100_max
value: -9.8837
- type: nauc_map_at_100_std
value: -19.741400000000002
- type: nauc_map_at_100_diff1
value: 21.067700000000002
- type: nauc_map_at_1000_max
value: -9.8837
- type: nauc_map_at_1000_std
value: -19.741400000000002
- type: nauc_map_at_1000_diff1
value: 21.067700000000002
- type: nauc_recall_at_1_max
value: -12.633600000000001
- type: nauc_recall_at_1_std
value: -19.747700000000002
- type: nauc_recall_at_1_diff1
value: 25.820300000000003
- type: nauc_recall_at_3_max
value: -4.7886999999999995
- type: nauc_recall_at_3_std
value: -17.4764
- type: nauc_recall_at_3_diff1
value: 14.3308
- type: nauc_recall_at_5_max
value: -1.4803
- type: nauc_recall_at_5_std
value: -18.2137
- type: nauc_recall_at_5_diff1
value: 9.4604
- type: nauc_recall_at_10_max
value: -8.8102
- type: nauc_recall_at_10_std
value: -21.8279
- type: nauc_recall_at_10_diff1
value: -5.5388
- type: nauc_recall_at_20_max
value: 25.202400000000004
- type: nauc_recall_at_20_std
value: -4.8613
- type: nauc_recall_at_20_diff1
value: 9.3917
- type: nauc_recall_at_100_max
value: 37.0551
- type: nauc_recall_at_100_std
value: 84.5472
- type: nauc_recall_at_100_diff1
value: 19.5116
- type: nauc_recall_at_1000_max
value: 37.0551
- type: nauc_recall_at_1000_std
value: 84.5472
- type: nauc_recall_at_1000_diff1
value: 19.5116
- type: nauc_precision_at_1_max
value: -12.633600000000001
- type: nauc_precision_at_1_std
value: -19.747700000000002
- type: nauc_precision_at_1_diff1
value: 25.820300000000003
- type: nauc_precision_at_3_max
value: -4.7886999999999995
- type: nauc_precision_at_3_std
value: -17.4764
- type: nauc_precision_at_3_diff1
value: 14.3308
- type: nauc_precision_at_5_max
value: -1.4803
- type: nauc_precision_at_5_std
value: -18.2137
- type: nauc_precision_at_5_diff1
value: 9.4604
- type: nauc_precision_at_10_max
value: -8.8102
- type: nauc_precision_at_10_std
value: -21.8279
- type: nauc_precision_at_10_diff1
value: -5.5388
- type: nauc_precision_at_20_max
value: 25.202400000000004
- type: nauc_precision_at_20_std
value: -4.8613
- type: nauc_precision_at_20_diff1
value: 9.3917
- type: nauc_precision_at_100_max
value: 37.0551
- type: nauc_precision_at_100_std
value: 84.5472
- type: nauc_precision_at_100_diff1
value: 19.5116
- type: nauc_precision_at_1000_max
value: 37.0551
- type: nauc_precision_at_1000_std
value: 84.5472
- type: nauc_precision_at_1000_diff1
value: 19.5116
- type: nauc_mrr_at_1_max
value: -11.9728
- type: nauc_mrr_at_1_std
value: -19.4014
- type: nauc_mrr_at_1_diff1
value: 24.8653
- type: nauc_mrr_at_3_max
value: -9.6607
- type: nauc_mrr_at_3_std
value: -19.1819
- type: nauc_mrr_at_3_diff1
value: 20.0205
- type: nauc_mrr_at_5_max
value: -9.4261
- type: nauc_mrr_at_5_std
value: -19.3098
- type: nauc_mrr_at_5_diff1
value: 19.6347
- type: nauc_mrr_at_10_max
value: -10.1698
- type: nauc_mrr_at_10_std
value: -19.683
- type: nauc_mrr_at_10_diff1
value: 19.4823
- type: nauc_mrr_at_20_max
value: -9.913
- type: nauc_mrr_at_20_std
value: -19.517300000000002
- type: nauc_mrr_at_20_diff1
value: 20.0066
- type: nauc_mrr_at_100_max
value: -10.0053
- type: nauc_mrr_at_100_std
value: -19.5824
- type: nauc_mrr_at_100_diff1
value: 20.061899999999998
- type: nauc_mrr_at_1000_max
value: -10.0053
- type: nauc_mrr_at_1000_std
value: -19.5824
- type: nauc_mrr_at_1000_diff1
value: 20.061899999999998
- type: main_score
value: 65.204
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 55.710300000000004
- type: v_measure_std
value: 14.551900000000002
- type: main_score
value: 55.710300000000004
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.004599999999996
- type: v_measure_std
value: 14.868899999999998
- type: main_score
value: 51.004599999999996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.38119999999999
- type: mrr
value: 79.0692
- type: nAUC_map_max
value: 21.9757
- type: nAUC_map_std
value: 14.3545
- type: nAUC_map_diff1
value: 11.82
- type: nAUC_mrr_max
value: 40.3449
- type: nAUC_mrr_std
value: 18.2678
- type: nAUC_mrr_diff1
value: 21.1596
- type: main_score
value: 67.38119999999999
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: pearson
value: 87.4332
- type: spearman
value: 84.65979999999999
- type: cosine_pearson
value: 87.4332
- type: cosine_spearman
value: 84.65979999999999
- type: manhattan_pearson
value: 86.37270000000001
- type: manhattan_spearman
value: 85.20309999999999
- type: euclidean_pearson
value: 86.1672
- type: euclidean_spearman
value: 84.7453
- type: main_score
value: 84.65979999999999
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.224
- type: f1
value: 86.7235
- type: f1_weighted
value: 86.7235
- type: main_score
value: 87.224
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 52.04990000000001
- type: v_measure_std
value: 0.7255
- type: main_score
value: 52.04990000000001
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 47.5811
- type: v_measure_std
value: 0.8049000000000001
- type: main_score
value: 47.5811
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: ndcg_at_1
value: 46.781
- type: ndcg_at_3
value: 52.111
- type: ndcg_at_5
value: 54.371
- type: ndcg_at_10
value: 57.247
- type: ndcg_at_20
value: 59.122
- type: ndcg_at_100
value: 62.071
- type: ndcg_at_1000
value: 63.32
- type: map_at_1
value: 37.2
- type: map_at_3
value: 46.005
- type: map_at_5
value: 48.32
- type: map_at_10
value: 50.181
- type: map_at_20
value: 51.062
- type: map_at_100
value: 51.735
- type: map_at_1000
value: 51.842
- type: recall_at_1
value: 37.2
- type: recall_at_3
value: 53.94
- type: recall_at_5
value: 60.88
- type: recall_at_10
value: 69.692
- type: recall_at_20
value: 76.711
- type: recall_at_100
value: 90.263
- type: recall_at_1000
value: 97.749
- type: precision_at_1
value: 46.781
- type: precision_at_3
value: 25.894000000000002
- type: precision_at_5
value: 18.541
- type: precision_at_10
value: 11.402
- type: precision_at_20
value: 6.567
- type: precision_at_100
value: 1.704
- type: precision_at_1000
value: 0.21
- type: mrr_at_1
value: 46.781099999999995
- type: mrr_at_3
value: 54.6257
- type: mrr_at_5
value: 56.0992
- type: mrr_at_10
value: 57.00769999999999
- type: mrr_at_20
value: 57.3576
- type: mrr_at_100
value: 57.6118
- type: mrr_at_1000
value: 57.6415
- type: nauc_ndcg_at_1_max
value: 41.3069
- type: nauc_ndcg_at_1_std
value: -3.9859999999999998
- type: nauc_ndcg_at_1_diff1
value: 56.355900000000005
- type: nauc_ndcg_at_3_max
value: 40.354299999999995
- type: nauc_ndcg_at_3_std
value: -1.4893
- type: nauc_ndcg_at_3_diff1
value: 51.473800000000004
- type: nauc_ndcg_at_5_max
value: 39.2091
- type: nauc_ndcg_at_5_std
value: -2.6139
- type: nauc_ndcg_at_5_diff1
value: 50.4382
- type: nauc_ndcg_at_10_max
value: 37.8136
- type: nauc_ndcg_at_10_std
value: -1.9053
- type: nauc_ndcg_at_10_diff1
value: 49.677
- type: nauc_ndcg_at_20_max
value: 37.7571
- type: nauc_ndcg_at_20_std
value: -1.0221
- type: nauc_ndcg_at_20_diff1
value: 49.6703
- type: nauc_ndcg_at_100_max
value: 39.0419
- type: nauc_ndcg_at_100_std
value: 0.5525
- type: nauc_ndcg_at_100_diff1
value: 50.8714
- type: nauc_ndcg_at_1000_max
value: 39.4123
- type: nauc_ndcg_at_1000_std
value: 0.2088
- type: nauc_ndcg_at_1000_diff1
value: 51.0321
- type: nauc_map_at_1_max
value: 31.1237
- type: nauc_map_at_1_std
value: -7.0686
- type: nauc_map_at_1_diff1
value: 55.28189999999999
- type: nauc_map_at_3_max
value: 36.750899999999994
- type: nauc_map_at_3_std
value: -4.502599999999999
- type: nauc_map_at_3_diff1
value: 52.64640000000001
- type: nauc_map_at_5_max
value: 37.4208
- type: nauc_map_at_5_std
value: -4.3387
- type: nauc_map_at_5_diff1
value: 51.8133
- type: nauc_map_at_10_max
value: 37.4829
- type: nauc_map_at_10_std
value: -3.3794999999999997
- type: nauc_map_at_10_diff1
value: 51.640299999999996
- type: nauc_map_at_20_max
value: 37.7883
- type: nauc_map_at_20_std
value: -2.9455999999999998
- type: nauc_map_at_20_diff1
value: 51.635299999999994
- type: nauc_map_at_100_max
value: 38.1532
- type: nauc_map_at_100_std
value: -2.513
- type: nauc_map_at_100_diff1
value: 51.8036
- type: nauc_map_at_1000_max
value: 38.2036
- type: nauc_map_at_1000_std
value: -2.5201000000000002
- type: nauc_map_at_1000_diff1
value: 51.807
- type: nauc_recall_at_1_max
value: 31.1237
- type: nauc_recall_at_1_std
value: -7.0686
- type: nauc_recall_at_1_diff1
value: 55.28189999999999
- type: nauc_recall_at_3_max
value: 33.942899999999995
- type: nauc_recall_at_3_std
value: -2.1842
- type: nauc_recall_at_3_diff1
value: 46.806
- type: nauc_recall_at_5_max
value: 32.935199999999995
- type: nauc_recall_at_5_std
value: -3.6494999999999997
- type: nauc_recall_at_5_diff1
value: 43.453599999999994
- type: nauc_recall_at_10_max
value: 28.2544
- type: nauc_recall_at_10_std
value: -1.1788
- type: nauc_recall_at_10_diff1
value: 38.8916
- type: nauc_recall_at_20_max
value: 27.1235
- type: nauc_recall_at_20_std
value: 2.8238
- type: nauc_recall_at_20_diff1
value: 36.9813
- type: nauc_recall_at_100_max
value: 31.899300000000004
- type: nauc_recall_at_100_std
value: 22.3521
- type: nauc_recall_at_100_diff1
value: 43.0867
- type: nauc_recall_at_1000_max
value: 45.7236
- type: nauc_recall_at_1000_std
value: 63.258199999999995
- type: nauc_recall_at_1000_diff1
value: 56.854499999999994
- type: nauc_precision_at_1_max
value: 41.3069
- type: nauc_precision_at_1_std
value: -3.9859999999999998
- type: nauc_precision_at_1_diff1
value: 56.355900000000005
- type: nauc_precision_at_3_max
value: 39.6888
- type: nauc_precision_at_3_std
value: 5.7427
- type: nauc_precision_at_3_diff1
value: 26.694699999999997
- type: nauc_precision_at_5_max
value: 34.3509
- type: nauc_precision_at_5_std
value: 7.3069999999999995
- type: nauc_precision_at_5_diff1
value: 15.4004
- type: nauc_precision_at_10_max
value: 23.2221
- type: nauc_precision_at_10_std
value: 10.559899999999999
- type: nauc_precision_at_10_diff1
value: 3.6925
- type: nauc_precision_at_20_max
value: 17.9697
- type: nauc_precision_at_20_std
value: 11.9127
- type: nauc_precision_at_20_diff1
value: -2.7178
- type: nauc_precision_at_100_max
value: 11.8537
- type: nauc_precision_at_100_std
value: 11.442
- type: nauc_precision_at_100_diff1
value: -11.2562
- type: nauc_precision_at_1000_max
value: 5.7549
- type: nauc_precision_at_1000_std
value: 0.40169999999999995
- type: nauc_precision_at_1000_diff1
value: -18.0644
- type: nauc_mrr_at_1_max
value: 41.3069
- type: nauc_mrr_at_1_std
value: -3.9859999999999998
- type: nauc_mrr_at_1_diff1
value: 56.355900000000005
- type: nauc_mrr_at_3_max
value: 41.626200000000004
- type: nauc_mrr_at_3_std
value: -0.7362
- type: nauc_mrr_at_3_diff1
value: 52.7305
- type: nauc_mrr_at_5_max
value: 41.341499999999996
- type: nauc_mrr_at_5_std
value: -1.113
- type: nauc_mrr_at_5_diff1
value: 52.159299999999995
- type: nauc_mrr_at_10_max
value: 40.9696
- type: nauc_mrr_at_10_std
value: -0.7545999999999999
- type: nauc_mrr_at_10_diff1
value: 51.9591
- type: nauc_mrr_at_20_max
value: 41.0028
- type: nauc_mrr_at_20_std
value: -0.5925
- type: nauc_mrr_at_20_diff1
value: 52.0497
- type: nauc_mrr_at_100_max
value: 41.0447
- type: nauc_mrr_at_100_std
value: -0.6299
- type: nauc_mrr_at_100_diff1
value: 52.2239
- type: nauc_mrr_at_1000_max
value: 41.045
- type: nauc_mrr_at_1000_std
value: -0.6354000000000001
- type: nauc_mrr_at_1000_diff1
value: 52.2368
- type: main_score
value: 57.247
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: ndcg_at_1
value: 46.497
- type: ndcg_at_3
value: 50.580999999999996
- type: ndcg_at_5
value: 52.159
- type: ndcg_at_10
value: 54.269999999999996
- type: ndcg_at_20
value: 56.001
- type: ndcg_at_100
value: 58.373
- type: ndcg_at_1000
value: 60.272000000000006
- type: map_at_1
value: 36.296
- type: map_at_3
value: 45.178000000000004
- type: map_at_5
value: 46.96
- type: map_at_10
value: 48.388999999999996
- type: map_at_20
value: 49.164
- type: map_at_100
value: 49.769000000000005
- type: map_at_1000
value: 49.906
- type: recall_at_1
value: 36.296
- type: recall_at_3
value: 51.505
- type: recall_at_5
value: 56.633
- type: recall_at_10
value: 63.519999999999996
- type: recall_at_20
value: 70.06099999999999
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 92.396
- type: precision_at_1
value: 46.497
- type: precision_at_3
value: 25.032
- type: precision_at_5
value: 17.427
- type: precision_at_10
value: 10.433
- type: precision_at_20
value: 6.085999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.20600000000000002
- type: mrr_at_1
value: 46.4968
- type: mrr_at_3
value: 52.9193
- type: mrr_at_5
value: 54.0117
- type: mrr_at_10
value: 54.815400000000004
- type: mrr_at_20
value: 55.2265
- type: mrr_at_100
value: 55.4713
- type: mrr_at_1000
value: 55.51239999999999
- type: nauc_ndcg_at_1_max
value: 44.406400000000005
- type: nauc_ndcg_at_1_std
value: -0.5542
- type: nauc_ndcg_at_1_diff1
value: 61.951
- type: nauc_ndcg_at_3_max
value: 41.765
- type: nauc_ndcg_at_3_std
value: -1.6297
- type: nauc_ndcg_at_3_diff1
value: 56.6064
- type: nauc_ndcg_at_5_max
value: 41.2399
- type: nauc_ndcg_at_5_std
value: -1.6739
- type: nauc_ndcg_at_5_diff1
value: 56.736
- type: nauc_ndcg_at_10_max
value: 41.3525
- type: nauc_ndcg_at_10_std
value: -1.0736
- type: nauc_ndcg_at_10_diff1
value: 56.619600000000005
- type: nauc_ndcg_at_20_max
value: 40.950900000000004
- type: nauc_ndcg_at_20_std
value: 0.2865
- type: nauc_ndcg_at_20_diff1
value: 56.09459999999999
- type: nauc_ndcg_at_100_max
value: 41.4473
- type: nauc_ndcg_at_100_std
value: 1.9026999999999998
- type: nauc_ndcg_at_100_diff1
value: 55.6213
- type: nauc_ndcg_at_1000_max
value: 42.2036
- type: nauc_ndcg_at_1000_std
value: 2.2438
- type: nauc_ndcg_at_1000_diff1
value: 55.9627
- type: nauc_map_at_1_max
value: 32.782
- type: nauc_map_at_1_std
value: -9.6273
- type: nauc_map_at_1_diff1
value: 61.260099999999994
- type: nauc_map_at_3_max
value: 37.1299
- type: nauc_map_at_3_std
value: -7.119300000000001
- type: nauc_map_at_3_diff1
value: 58.1535
- type: nauc_map_at_5_max
value: 37.9225
- type: nauc_map_at_5_std
value: -5.9012
- type: nauc_map_at_5_diff1
value: 57.781499999999994
- type: nauc_map_at_10_max
value: 38.8891
- type: nauc_map_at_10_std
value: -4.874499999999999
- type: nauc_map_at_10_diff1
value: 57.755500000000005
- type: nauc_map_at_20_max
value: 39.3425
- type: nauc_map_at_20_std
value: -3.794
- type: nauc_map_at_20_diff1
value: 57.513400000000004
- type: nauc_map_at_100_max
value: 39.896
- type: nauc_map_at_100_std
value: -2.9215
- type: nauc_map_at_100_diff1
value: 57.53549999999999
- type: nauc_map_at_1000_max
value: 40.008300000000006
- type: nauc_map_at_1000_std
value: -2.8195
- type: nauc_map_at_1000_diff1
value: 57.60150000000001
- type: nauc_recall_at_1_max
value: 32.782
- type: nauc_recall_at_1_std
value: -9.6273
- type: nauc_recall_at_1_diff1
value: 61.260099999999994
- type: nauc_recall_at_3_max
value: 35.434599999999996
- type: nauc_recall_at_3_std
value: -6.290800000000001
- type: nauc_recall_at_3_diff1
value: 52.7104
- type: nauc_recall_at_5_max
value: 35.0165
- type: nauc_recall_at_5_std
value: -3.9936
- type: nauc_recall_at_5_diff1
value: 51.3274
- type: nauc_recall_at_10_max
value: 35.5228
- type: nauc_recall_at_10_std
value: -1.5428000000000002
- type: nauc_recall_at_10_diff1
value: 49.479299999999995
- type: nauc_recall_at_20_max
value: 33.7227
- type: nauc_recall_at_20_std
value: 4.9009
- type: nauc_recall_at_20_diff1
value: 45.5752
- type: nauc_recall_at_100_max
value: 35.9763
- type: nauc_recall_at_100_std
value: 19.3131
- type: nauc_recall_at_100_diff1
value: 39.911
- type: nauc_recall_at_1000_max
value: 46.325
- type: nauc_recall_at_1000_std
value: 44.9506
- type: nauc_recall_at_1000_diff1
value: 33.457100000000004
- type: nauc_precision_at_1_max
value: 44.406400000000005
- type: nauc_precision_at_1_std
value: -0.5542
- type: nauc_precision_at_1_diff1
value: 61.951
- type: nauc_precision_at_3_max
value: 37.5506
- type: nauc_precision_at_3_std
value: 11.0345
- type: nauc_precision_at_3_diff1
value: 29.8222
- type: nauc_precision_at_5_max
value: 35.2397
- type: nauc_precision_at_5_std
value: 15.389800000000001
- type: nauc_precision_at_5_diff1
value: 21.4168
- type: nauc_precision_at_10_max
value: 32.9495
- type: nauc_precision_at_10_std
value: 21.513299999999997
- type: nauc_precision_at_10_diff1
value: 11.8737
- type: nauc_precision_at_20_max
value: 29.1383
- type: nauc_precision_at_20_std
value: 27.7364
- type: nauc_precision_at_20_diff1
value: 3.9266
- type: nauc_precision_at_100_max
value: 25.405
- type: nauc_precision_at_100_std
value: 32.3313
- type: nauc_precision_at_100_diff1
value: -3.4574000000000003
- type: nauc_precision_at_1000_max
value: 19.0155
- type: nauc_precision_at_1000_std
value: 28.895
- type: nauc_precision_at_1000_diff1
value: -8.5325
- type: nauc_mrr_at_1_max
value: 44.406400000000005
- type: nauc_mrr_at_1_std
value: -0.5542
- type: nauc_mrr_at_1_diff1
value: 61.951
- type: nauc_mrr_at_3_max
value: 45.171499999999995
- type: nauc_mrr_at_3_std
value: 1.3709
- type: nauc_mrr_at_3_diff1
value: 58.655199999999994
- type: nauc_mrr_at_5_max
value: 44.770700000000005
- type: nauc_mrr_at_5_std
value: 1.4206999999999999
- type: nauc_mrr_at_5_diff1
value: 58.5418
- type: nauc_mrr_at_10_max
value: 44.6537
- type: nauc_mrr_at_10_std
value: 1.6499
- type: nauc_mrr_at_10_diff1
value: 58.305099999999996
- type: nauc_mrr_at_20_max
value: 44.5462
- type: nauc_mrr_at_20_std
value: 1.8207
- type: nauc_mrr_at_20_diff1
value: 58.2175
- type: nauc_mrr_at_100_max
value: 44.5707
- type: nauc_mrr_at_100_std
value: 1.9595999999999998
- type: nauc_mrr_at_100_diff1
value: 58.1794
- type: nauc_mrr_at_1000_max
value: 44.5849
- type: nauc_mrr_at_1000_std
value: 1.9480999999999997
- type: nauc_mrr_at_1000_diff1
value: 58.1948
- type: main_score
value: 54.269999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_3
value: 57.473
- type: ndcg_at_5
value: 60.331999999999994
- type: ndcg_at_10
value: 63.253
- type: ndcg_at_20
value: 64.934
- type: ndcg_at_100
value: 66.893
- type: ndcg_at_1000
value: 67.635
- type: map_at_1
value: 43.495
- type: map_at_3
value: 53.501
- type: map_at_5
value: 55.591
- type: map_at_10
value: 57.143
- type: map_at_20
value: 57.772999999999996
- type: map_at_100
value: 58.144
- type: map_at_1000
value: 58.18299999999999
- type: recall_at_1
value: 43.495
- type: recall_at_3
value: 61.943000000000005
- type: recall_at_5
value: 69.06
- type: recall_at_10
value: 77.506
- type: recall_at_20
value: 83.718
- type: recall_at_100
value: 93.012
- type: recall_at_1000
value: 98.20100000000001
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_3
value: 25.789
- type: precision_at_5
value: 17.718
- type: precision_at_10
value: 10.257
- type: precision_at_20
value: 5.652
- type: precision_at_100
value: 1.2930000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: mrr_at_1
value: 50.4702
- type: mrr_at_3
value: 58.4013
- type: mrr_at_5
value: 59.8495
- type: mrr_at_10
value: 60.83070000000001
- type: mrr_at_20
value: 61.2041
- type: mrr_at_100
value: 61.4112
- type: mrr_at_1000
value: 61.4322
- type: nauc_ndcg_at_1_max
value: 44.8291
- type: nauc_ndcg_at_1_std
value: -0.8412
- type: nauc_ndcg_at_1_diff1
value: 59.08560000000001
- type: nauc_ndcg_at_3_max
value: 44.248
- type: nauc_ndcg_at_3_std
value: -1.4833
- type: nauc_ndcg_at_3_diff1
value: 54.230599999999995
- type: nauc_ndcg_at_5_max
value: 44.1923
- type: nauc_ndcg_at_5_std
value: 0.4302
- type: nauc_ndcg_at_5_diff1
value: 53.488
- type: nauc_ndcg_at_10_max
value: 44.6104
- type: nauc_ndcg_at_10_std
value: 1.0661
- type: nauc_ndcg_at_10_diff1
value: 53.3798
- type: nauc_ndcg_at_20_max
value: 44.9242
- type: nauc_ndcg_at_20_std
value: 2.2277
- type: nauc_ndcg_at_20_diff1
value: 53.317400000000006
- type: nauc_ndcg_at_100_max
value: 45.7183
- type: nauc_ndcg_at_100_std
value: 2.5153
- type: nauc_ndcg_at_100_diff1
value: 53.9388
- type: nauc_ndcg_at_1000_max
value: 45.6995
- type: nauc_ndcg_at_1000_std
value: 2.1667
- type: nauc_ndcg_at_1000_diff1
value: 54.2102
- type: nauc_map_at_1_max
value: 35.5633
- type: nauc_map_at_1_std
value: -5.6190999999999995
- type: nauc_map_at_1_diff1
value: 60.361
- type: nauc_map_at_3_max
value: 41.9146
- type: nauc_map_at_3_std
value: -3.4212
- type: nauc_map_at_3_diff1
value: 56.016999999999996
- type: nauc_map_at_5_max
value: 42.4117
- type: nauc_map_at_5_std
value: -2.0291
- type: nauc_map_at_5_diff1
value: 55.3653
- type: nauc_map_at_10_max
value: 43.1409
- type: nauc_map_at_10_std
value: -1.4177
- type: nauc_map_at_10_diff1
value: 55.2121
- type: nauc_map_at_20_max
value: 43.437799999999996
- type: nauc_map_at_20_std
value: -0.8902
- type: nauc_map_at_20_diff1
value: 55.206999999999994
- type: nauc_map_at_100_max
value: 43.6537
- type: nauc_map_at_100_std
value: -0.7274
- type: nauc_map_at_100_diff1
value: 55.2408
- type: nauc_map_at_1000_max
value: 43.6736
- type: nauc_map_at_1000_std
value: -0.7106
- type: nauc_map_at_1000_diff1
value: 55.2524
- type: nauc_recall_at_1_max
value: 35.5633
- type: nauc_recall_at_1_std
value: -5.6190999999999995
- type: nauc_recall_at_1_diff1
value: 60.361
- type: nauc_recall_at_3_max
value: 40.8742
- type: nauc_recall_at_3_std
value: -2.5945
- type: nauc_recall_at_3_diff1
value: 49.3961
- type: nauc_recall_at_5_max
value: 40.7505
- type: nauc_recall_at_5_std
value: 3.3495
- type: nauc_recall_at_5_diff1
value: 45.7721
- type: nauc_recall_at_10_max
value: 41.5818
- type: nauc_recall_at_10_std
value: 6.7775
- type: nauc_recall_at_10_diff1
value: 43.3298
- type: nauc_recall_at_20_max
value: 43.7288
- type: nauc_recall_at_20_std
value: 16.328799999999998
- type: nauc_recall_at_20_diff1
value: 39.9662
- type: nauc_recall_at_100_max
value: 55.63550000000001
- type: nauc_recall_at_100_std
value: 34.4464
- type: nauc_recall_at_100_diff1
value: 39.980399999999996
- type: nauc_recall_at_1000_max
value: 77.1968
- type: nauc_recall_at_1000_std
value: 71.4214
- type: nauc_recall_at_1000_diff1
value: 38.244
- type: nauc_precision_at_1_max
value: 44.8291
- type: nauc_precision_at_1_std
value: -0.8412
- type: nauc_precision_at_1_diff1
value: 59.08560000000001
- type: nauc_precision_at_3_max
value: 43.1877
- type: nauc_precision_at_3_std
value: 7.6498
- type: nauc_precision_at_3_diff1
value: 27.5239
- type: nauc_precision_at_5_max
value: 37.533899999999996
- type: nauc_precision_at_5_std
value: 13.2708
- type: nauc_precision_at_5_diff1
value: 16.1311
- type: nauc_precision_at_10_max
value: 33.2608
- type: nauc_precision_at_10_std
value: 17.788899999999998
- type: nauc_precision_at_10_diff1
value: 5.1528
- type: nauc_precision_at_20_max
value: 29.401
- type: nauc_precision_at_20_std
value: 22.9012
- type: nauc_precision_at_20_diff1
value: -2.9066
- type: nauc_precision_at_100_max
value: 23.2408
- type: nauc_precision_at_100_std
value: 24.2959
- type: nauc_precision_at_100_diff1
value: -12.2627
- type: nauc_precision_at_1000_max
value: 20.244300000000003
- type: nauc_precision_at_1000_std
value: 25.682100000000002
- type: nauc_precision_at_1000_diff1
value: -16.4621
- type: nauc_mrr_at_1_max
value: 44.8291
- type: nauc_mrr_at_1_std
value: -0.8412
- type: nauc_mrr_at_1_diff1
value: 59.08560000000001
- type: nauc_mrr_at_3_max
value: 45.9874
- type: nauc_mrr_at_3_std
value: -0.4012
- type: nauc_mrr_at_3_diff1
value: 55.1381
- type: nauc_mrr_at_5_max
value: 46.1061
- type: nauc_mrr_at_5_std
value: 0.9194000000000001
- type: nauc_mrr_at_5_diff1
value: 54.799699999999994
- type: nauc_mrr_at_10_max
value: 46.0658
- type: nauc_mrr_at_10_std
value: 0.9317000000000001
- type: nauc_mrr_at_10_diff1
value: 54.918
- type: nauc_mrr_at_20_max
value: 46.135999999999996
- type: nauc_mrr_at_20_std
value: 1.1449
- type: nauc_mrr_at_20_diff1
value: 54.8537
- type: nauc_mrr_at_100_max
value: 46.1801
- type: nauc_mrr_at_100_std
value: 1.1052
- type: nauc_mrr_at_100_diff1
value: 54.9671
- type: nauc_mrr_at_1000_max
value: 46.169399999999996
- type: nauc_mrr_at_1000_std
value: 1.0886
- type: nauc_mrr_at_1000_diff1
value: 54.974500000000006
- type: main_score
value: 63.253
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: ndcg_at_1
value: 30.169
- type: ndcg_at_3
value: 38.712999999999994
- type: ndcg_at_5
value: 41.613
- type: ndcg_at_10
value: 44.721
- type: ndcg_at_20
value: 46.861999999999995
- type: ndcg_at_100
value: 49.725
- type: ndcg_at_1000
value: 51.321000000000005
- type: map_at_1
value: 27.534
- type: map_at_3
value: 35.543
- type: map_at_5
value: 37.289
- type: map_at_10
value: 38.7
- type: map_at_20
value: 39.338
- type: map_at_100
value: 39.785
- type: map_at_1000
value: 39.853
- type: recall_at_1
value: 27.534
- type: recall_at_3
value: 45.007999999999996
- type: recall_at_5
value: 51.888999999999996
- type: recall_at_10
value: 61.023
- type: recall_at_20
value: 69.053
- type: recall_at_100
value: 83.42399999999999
- type: recall_at_1000
value: 95.268
- type: precision_at_1
value: 30.169
- type: precision_at_3
value: 16.911
- type: precision_at_5
value: 11.932
- type: precision_at_10
value: 7.141
- type: precision_at_20
value: 4.079
- type: precision_at_100
value: 1.008
- type: precision_at_1000
value: 0.11800000000000001
- type: mrr_at_1
value: 30.1695
- type: mrr_at_3
value: 37.966100000000004
- type: mrr_at_5
value: 39.6158
- type: mrr_at_10
value: 40.8033
- type: mrr_at_20
value: 41.3435
- type: mrr_at_100
value: 41.7118
- type: mrr_at_1000
value: 41.7613
- type: nauc_ndcg_at_1_max
value: 32.2399
- type: nauc_ndcg_at_1_std
value: -1.8931
- type: nauc_ndcg_at_1_diff1
value: 47.5451
- type: nauc_ndcg_at_3_max
value: 27.8837
- type: nauc_ndcg_at_3_std
value: -5.9821
- type: nauc_ndcg_at_3_diff1
value: 39.014500000000005
- type: nauc_ndcg_at_5_max
value: 27.631299999999996
- type: nauc_ndcg_at_5_std
value: -5.3482
- type: nauc_ndcg_at_5_diff1
value: 38.4224
- type: nauc_ndcg_at_10_max
value: 28.481499999999997
- type: nauc_ndcg_at_10_std
value: -3.1228
- type: nauc_ndcg_at_10_diff1
value: 37.3671
- type: nauc_ndcg_at_20_max
value: 30.317899999999998
- type: nauc_ndcg_at_20_std
value: -1.5421
- type: nauc_ndcg_at_20_diff1
value: 36.929
- type: nauc_ndcg_at_100_max
value: 30.622300000000003
- type: nauc_ndcg_at_100_std
value: -1.1266
- type: nauc_ndcg_at_100_diff1
value: 37.844100000000005
- type: nauc_ndcg_at_1000_max
value: 30.2567
- type: nauc_ndcg_at_1000_std
value: -1.6924000000000001
- type: nauc_ndcg_at_1000_diff1
value: 38.3512
- type: nauc_map_at_1_max
value: 28.498800000000003
- type: nauc_map_at_1_std
value: -4.840400000000001
- type: nauc_map_at_1_diff1
value: 49.2647
- type: nauc_map_at_3_max
value: 27.3679
- type: nauc_map_at_3_std
value: -5.953399999999999
- type: nauc_map_at_3_diff1
value: 41.458800000000004
- type: nauc_map_at_5_max
value: 27.501900000000003
- type: nauc_map_at_5_std
value: -5.5226
- type: nauc_map_at_5_diff1
value: 41.2074
- type: nauc_map_at_10_max
value: 28.1719
- type: nauc_map_at_10_std
value: -4.3525
- type: nauc_map_at_10_diff1
value: 40.820299999999996
- type: nauc_map_at_20_max
value: 28.67
- type: nauc_map_at_20_std
value: -3.9206
- type: nauc_map_at_20_diff1
value: 40.7258
- type: nauc_map_at_100_max
value: 28.765
- type: nauc_map_at_100_std
value: -3.8413999999999997
- type: nauc_map_at_100_diff1
value: 40.876200000000004
- type: nauc_map_at_1000_max
value: 28.7672
- type: nauc_map_at_1000_std
value: -3.8369
- type: nauc_map_at_1000_diff1
value: 40.8998
- type: nauc_recall_at_1_max
value: 28.498800000000003
- type: nauc_recall_at_1_std
value: -4.840400000000001
- type: nauc_recall_at_1_diff1
value: 49.2647
- type: nauc_recall_at_3_max
value: 24.6035
- type: nauc_recall_at_3_std
value: -7.5891
- type: nauc_recall_at_3_diff1
value: 31.5757
- type: nauc_recall_at_5_max
value: 24.369
- type: nauc_recall_at_5_std
value: -6.1809
- type: nauc_recall_at_5_diff1
value: 29.604000000000003
- type: nauc_recall_at_10_max
value: 26.289299999999997
- type: nauc_recall_at_10_std
value: 0.3042
- type: nauc_recall_at_10_diff1
value: 25.5857
- type: nauc_recall_at_20_max
value: 34.4737
- type: nauc_recall_at_20_std
value: 7.437199999999999
- type: nauc_recall_at_20_diff1
value: 22.174599999999998
- type: nauc_recall_at_100_max
value: 38.6347
- type: nauc_recall_at_100_std
value: 16.145300000000002
- type: nauc_recall_at_100_diff1
value: 22.1917
- type: nauc_recall_at_1000_max
value: 39.3542
- type: nauc_recall_at_1000_std
value: 22.6306
- type: nauc_recall_at_1000_diff1
value: 12.8224
- type: nauc_precision_at_1_max
value: 32.2399
- type: nauc_precision_at_1_std
value: -1.8931
- type: nauc_precision_at_1_diff1
value: 47.5451
- type: nauc_precision_at_3_max
value: 30.2123
- type: nauc_precision_at_3_std
value: -3.3314999999999997
- type: nauc_precision_at_3_diff1
value: 27.9644
- type: nauc_precision_at_5_max
value: 29.670800000000003
- type: nauc_precision_at_5_std
value: -1.5582
- type: nauc_precision_at_5_diff1
value: 25.608399999999996
- type: nauc_precision_at_10_max
value: 31.0615
- type: nauc_precision_at_10_std
value: 6.0033
- type: nauc_precision_at_10_diff1
value: 18.8733
- type: nauc_precision_at_20_max
value: 34.6328
- type: nauc_precision_at_20_std
value: 13.439799999999998
- type: nauc_precision_at_20_diff1
value: 13.048599999999999
- type: nauc_precision_at_100_max
value: 29.820200000000003
- type: nauc_precision_at_100_std
value: 17.5458
- type: nauc_precision_at_100_diff1
value: 5.4185
- type: nauc_precision_at_1000_max
value: 18.1553
- type: nauc_precision_at_1000_std
value: 13.908999999999999
- type: nauc_precision_at_1000_diff1
value: -6.718300000000001
- type: nauc_mrr_at_1_max
value: 32.2399
- type: nauc_mrr_at_1_std
value: -1.8931
- type: nauc_mrr_at_1_diff1
value: 47.5451
- type: nauc_mrr_at_3_max
value: 30.0502
- type: nauc_mrr_at_3_std
value: -3.7917
- type: nauc_mrr_at_3_diff1
value: 40.4491
- type: nauc_mrr_at_5_max
value: 29.9223
- type: nauc_mrr_at_5_std
value: -3.7439
- type: nauc_mrr_at_5_diff1
value: 40.2078
- type: nauc_mrr_at_10_max
value: 30.0837
- type: nauc_mrr_at_10_std
value: -3.1279000000000003
- type: nauc_mrr_at_10_diff1
value: 39.8604
- type: nauc_mrr_at_20_max
value: 30.586600000000004
- type: nauc_mrr_at_20_std
value: -2.6394
- type: nauc_mrr_at_20_diff1
value: 39.7155
- type: nauc_mrr_at_100_max
value: 30.6013
- type: nauc_mrr_at_100_std
value: -2.6131
- type: nauc_mrr_at_100_diff1
value: 39.8572
- type: nauc_mrr_at_1000_max
value: 30.588700000000003
- type: nauc_mrr_at_1000_std
value: -2.6162
- type: nauc_mrr_at_1000_diff1
value: 39.8847
- type: main_score
value: 44.721
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: ndcg_at_1
value: 25.995
- type: ndcg_at_3
value: 31.527
- type: ndcg_at_5
value: 34.498
- type: ndcg_at_10
value: 37.421
- type: ndcg_at_20
value: 40.19
- type: ndcg_at_100
value: 43.662
- type: ndcg_at_1000
value: 46.04
- type: map_at_1
value: 20.624000000000002
- type: map_at_3
value: 27.714
- type: map_at_5
value: 29.746
- type: map_at_10
value: 31.156
- type: map_at_20
value: 32.018
- type: map_at_100
value: 32.647
- type: map_at_1000
value: 32.751999999999995
- type: recall_at_1
value: 20.624000000000002
- type: recall_at_3
value: 35.064
- type: recall_at_5
value: 42.561
- type: recall_at_10
value: 51.294
- type: recall_at_20
value: 61.35699999999999
- type: recall_at_100
value: 77.41
- type: recall_at_1000
value: 94.292
- type: precision_at_1
value: 25.995
- type: precision_at_3
value: 15.547
- type: precision_at_5
value: 11.616999999999999
- type: precision_at_10
value: 7.152
- type: precision_at_20
value: 4.335
- type: precision_at_100
value: 1.1769999999999998
- type: precision_at_1000
value: 0.149
- type: mrr_at_1
value: 25.995
- type: mrr_at_3
value: 33.354099999999995
- type: mrr_at_5
value: 35.1389
- type: mrr_at_10
value: 36.3085
- type: mrr_at_20
value: 36.9771
- type: mrr_at_100
value: 37.3263
- type: mrr_at_1000
value: 37.3819
- type: nauc_ndcg_at_1_max
value: 16.8793
- type: nauc_ndcg_at_1_std
value: 1.1622000000000001
- type: nauc_ndcg_at_1_diff1
value: 41.695
- type: nauc_ndcg_at_3_max
value: 15.7033
- type: nauc_ndcg_at_3_std
value: 1.6629999999999998
- type: nauc_ndcg_at_3_diff1
value: 36.4614
- type: nauc_ndcg_at_5_max
value: 15.5037
- type: nauc_ndcg_at_5_std
value: 2.9783
- type: nauc_ndcg_at_5_diff1
value: 35.1895
- type: nauc_ndcg_at_10_max
value: 14.0983
- type: nauc_ndcg_at_10_std
value: 3.2645
- type: nauc_ndcg_at_10_diff1
value: 34.6149
- type: nauc_ndcg_at_20_max
value: 15.567400000000001
- type: nauc_ndcg_at_20_std
value: 4.6046
- type: nauc_ndcg_at_20_diff1
value: 35.486200000000004
- type: nauc_ndcg_at_100_max
value: 16.6548
- type: nauc_ndcg_at_100_std
value: 6.6399
- type: nauc_ndcg_at_100_diff1
value: 35.268899999999995
- type: nauc_ndcg_at_1000_max
value: 16.7889
- type: nauc_ndcg_at_1000_std
value: 5.9258
- type: nauc_ndcg_at_1000_diff1
value: 36.0024
- type: nauc_map_at_1_max
value: 13.874400000000001
- type: nauc_map_at_1_std
value: 3.305
- type: nauc_map_at_1_diff1
value: 41.7572
- type: nauc_map_at_3_max
value: 14.519699999999998
- type: nauc_map_at_3_std
value: 2.3379
- type: nauc_map_at_3_diff1
value: 37.8774
- type: nauc_map_at_5_max
value: 14.702399999999999
- type: nauc_map_at_5_std
value: 2.7134
- type: nauc_map_at_5_diff1
value: 37.0712
- type: nauc_map_at_10_max
value: 14.2346
- type: nauc_map_at_10_std
value: 2.9902
- type: nauc_map_at_10_diff1
value: 36.7886
- type: nauc_map_at_20_max
value: 14.7155
- type: nauc_map_at_20_std
value: 3.4323
- type: nauc_map_at_20_diff1
value: 37.0342
- type: nauc_map_at_100_max
value: 15.015600000000001
- type: nauc_map_at_100_std
value: 3.8381
- type: nauc_map_at_100_diff1
value: 37.0107
- type: nauc_map_at_1000_max
value: 15.020800000000001
- type: nauc_map_at_1000_std
value: 3.8089999999999997
- type: nauc_map_at_1000_diff1
value: 37.0464
- type: nauc_recall_at_1_max
value: 13.874400000000001
- type: nauc_recall_at_1_std
value: 3.305
- type: nauc_recall_at_1_diff1
value: 41.7572
- type: nauc_recall_at_3_max
value: 14.191
- type: nauc_recall_at_3_std
value: 2.8631
- type: nauc_recall_at_3_diff1
value: 32.0865
- type: nauc_recall_at_5_max
value: 13.8317
- type: nauc_recall_at_5_std
value: 4.0751
- type: nauc_recall_at_5_diff1
value: 29.2578
- type: nauc_recall_at_10_max
value: 9.8311
- type: nauc_recall_at_10_std
value: 3.787
- type: nauc_recall_at_10_diff1
value: 27.0678
- type: nauc_recall_at_20_max
value: 14.6648
- type: nauc_recall_at_20_std
value: 8.6709
- type: nauc_recall_at_20_diff1
value: 28.928199999999997
- type: nauc_recall_at_100_max
value: 20.9186
- type: nauc_recall_at_100_std
value: 23.6257
- type: nauc_recall_at_100_diff1
value: 25.6988
- type: nauc_recall_at_1000_max
value: 38.6366
- type: nauc_recall_at_1000_std
value: 47.4022
- type: nauc_recall_at_1000_diff1
value: 29.6164
- type: nauc_precision_at_1_max
value: 16.8793
- type: nauc_precision_at_1_std
value: 1.1622000000000001
- type: nauc_precision_at_1_diff1
value: 41.695
- type: nauc_precision_at_3_max
value: 19.1054
- type: nauc_precision_at_3_std
value: -0.3239
- type: nauc_precision_at_3_diff1
value: 29.140700000000002
- type: nauc_precision_at_5_max
value: 18.3369
- type: nauc_precision_at_5_std
value: 2.2429
- type: nauc_precision_at_5_diff1
value: 23.5603
- type: nauc_precision_at_10_max
value: 14.048
- type: nauc_precision_at_10_std
value: 2.5635000000000003
- type: nauc_precision_at_10_diff1
value: 18.6389
- type: nauc_precision_at_20_max
value: 15.1054
- type: nauc_precision_at_20_std
value: 5.4473
- type: nauc_precision_at_20_diff1
value: 16.980999999999998
- type: nauc_precision_at_100_max
value: 12.1794
- type: nauc_precision_at_100_std
value: 7.657
- type: nauc_precision_at_100_diff1
value: 5.9291
- type: nauc_precision_at_1000_max
value: 7.6541999999999994
- type: nauc_precision_at_1000_std
value: -1.8911
- type: nauc_precision_at_1000_diff1
value: -0.042499999999999996
- type: nauc_mrr_at_1_max
value: 16.8793
- type: nauc_mrr_at_1_std
value: 1.1622000000000001
- type: nauc_mrr_at_1_diff1
value: 41.695
- type: nauc_mrr_at_3_max
value: 16.8712
- type: nauc_mrr_at_3_std
value: 1.9463000000000001
- type: nauc_mrr_at_3_diff1
value: 36.6252
- type: nauc_mrr_at_5_max
value: 16.9044
- type: nauc_mrr_at_5_std
value: 2.4106
- type: nauc_mrr_at_5_diff1
value: 36.2224
- type: nauc_mrr_at_10_max
value: 16.4922
- type: nauc_mrr_at_10_std
value: 2.0573
- type: nauc_mrr_at_10_diff1
value: 36.4031
- type: nauc_mrr_at_20_max
value: 16.9114
- type: nauc_mrr_at_20_std
value: 2.3496
- type: nauc_mrr_at_20_diff1
value: 36.592999999999996
- type: nauc_mrr_at_100_max
value: 16.9761
- type: nauc_mrr_at_100_std
value: 2.6144
- type: nauc_mrr_at_100_diff1
value: 36.5791
- type: nauc_mrr_at_1000_max
value: 16.97
- type: nauc_mrr_at_1000_std
value: 2.6048999999999998
- type: nauc_mrr_at_1000_diff1
value: 36.5997
- type: main_score
value: 37.421
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: ndcg_at_1
value: 43.118
- type: ndcg_at_3
value: 49.36
- type: ndcg_at_5
value: 52.339
- type: ndcg_at_10
value: 55.001
- type: ndcg_at_20
value: 57.08
- type: ndcg_at_100
value: 60.156
- type: ndcg_at_1000
value: 61.409000000000006
- type: map_at_1
value: 34.422999999999995
- type: map_at_3
value: 44.241
- type: map_at_5
value: 46.639
- type: map_at_10
value: 48.187999999999995
- type: map_at_20
value: 48.943999999999996
- type: map_at_100
value: 49.589
- type: map_at_1000
value: 49.683
- type: recall_at_1
value: 34.422999999999995
- type: recall_at_3
value: 53.005
- type: recall_at_5
value: 60.787
- type: recall_at_10
value: 68.947
- type: recall_at_20
value: 76.11099999999999
- type: recall_at_100
value: 90.093
- type: recall_at_1000
value: 97.616
- type: precision_at_1
value: 43.118
- type: precision_at_3
value: 24.03
- type: precision_at_5
value: 17.267
- type: precision_at_10
value: 10.221
- type: precision_at_20
value: 5.89
- type: precision_at_100
value: 1.503
- type: precision_at_1000
value: 0.178
- type: mrr_at_1
value: 43.1184
- type: mrr_at_3
value: 51.4277
- type: mrr_at_5
value: 53.054199999999994
- type: mrr_at_10
value: 54.0983
- type: mrr_at_20
value: 54.548300000000005
- type: mrr_at_100
value: 54.8195
- type: mrr_at_1000
value: 54.8432
- type: nauc_ndcg_at_1_max
value: 40.0497
- type: nauc_ndcg_at_1_std
value: -2.9893
- type: nauc_ndcg_at_1_diff1
value: 52.6498
- type: nauc_ndcg_at_3_max
value: 35.5227
- type: nauc_ndcg_at_3_std
value: -1.754
- type: nauc_ndcg_at_3_diff1
value: 47.9744
- type: nauc_ndcg_at_5_max
value: 36.2245
- type: nauc_ndcg_at_5_std
value: -1.2267000000000001
- type: nauc_ndcg_at_5_diff1
value: 47.6131
- type: nauc_ndcg_at_10_max
value: 36.8382
- type: nauc_ndcg_at_10_std
value: 0.0044
- type: nauc_ndcg_at_10_diff1
value: 48.5768
- type: nauc_ndcg_at_20_max
value: 36.5259
- type: nauc_ndcg_at_20_std
value: 0.6298
- type: nauc_ndcg_at_20_diff1
value: 48.3686
- type: nauc_ndcg_at_100_max
value: 37.8334
- type: nauc_ndcg_at_100_std
value: 1.4694
- type: nauc_ndcg_at_100_diff1
value: 48.629
- type: nauc_ndcg_at_1000_max
value: 38.1066
- type: nauc_ndcg_at_1000_std
value: 1.2034
- type: nauc_ndcg_at_1000_diff1
value: 48.7834
- type: nauc_map_at_1_max
value: 31.5692
- type: nauc_map_at_1_std
value: -5.4256
- type: nauc_map_at_1_diff1
value: 53.0706
- type: nauc_map_at_3_max
value: 33.3182
- type: nauc_map_at_3_std
value: -3.4004
- type: nauc_map_at_3_diff1
value: 49.742799999999995
- type: nauc_map_at_5_max
value: 34.745
- type: nauc_map_at_5_std
value: -2.6823
- type: nauc_map_at_5_diff1
value: 49.3108
- type: nauc_map_at_10_max
value: 35.6475
- type: nauc_map_at_10_std
value: -1.7001
- type: nauc_map_at_10_diff1
value: 49.7542
- type: nauc_map_at_20_max
value: 35.6477
- type: nauc_map_at_20_std
value: -1.4723
- type: nauc_map_at_20_diff1
value: 49.6954
- type: nauc_map_at_100_max
value: 35.9688
- type: nauc_map_at_100_std
value: -1.2590999999999999
- type: nauc_map_at_100_diff1
value: 49.661
- type: nauc_map_at_1000_max
value: 35.9962
- type: nauc_map_at_1000_std
value: -1.2567
- type: nauc_map_at_1000_diff1
value: 49.6742
- type: nauc_recall_at_1_max
value: 31.5692
- type: nauc_recall_at_1_std
value: -5.4256
- type: nauc_recall_at_1_diff1
value: 53.0706
- type: nauc_recall_at_3_max
value: 30.078500000000002
- type: nauc_recall_at_3_std
value: -1.6661
- type: nauc_recall_at_3_diff1
value: 43.605199999999996
- type: nauc_recall_at_5_max
value: 32.1419
- type: nauc_recall_at_5_std
value: -0.0245
- type: nauc_recall_at_5_diff1
value: 41.667100000000005
- type: nauc_recall_at_10_max
value: 33.3931
- type: nauc_recall_at_10_std
value: 4.3266
- type: nauc_recall_at_10_diff1
value: 43.1197
- type: nauc_recall_at_20_max
value: 29.959799999999998
- type: nauc_recall_at_20_std
value: 7.4322
- type: nauc_recall_at_20_diff1
value: 40.589999999999996
- type: nauc_recall_at_100_max
value: 35.565200000000004
- type: nauc_recall_at_100_std
value: 20.2683
- type: nauc_recall_at_100_diff1
value: 38.6228
- type: nauc_recall_at_1000_max
value: 57.227
- type: nauc_recall_at_1000_std
value: 45.2524
- type: nauc_recall_at_1000_diff1
value: 34.657700000000006
- type: nauc_precision_at_1_max
value: 40.0497
- type: nauc_precision_at_1_std
value: -2.9893
- type: nauc_precision_at_1_diff1
value: 52.6498
- type: nauc_precision_at_3_max
value: 33.8853
- type: nauc_precision_at_3_std
value: 5.0939
- type: nauc_precision_at_3_diff1
value: 27.423199999999998
- type: nauc_precision_at_5_max
value: 31.651
- type: nauc_precision_at_5_std
value: 7.5684000000000005
- type: nauc_precision_at_5_diff1
value: 17.874100000000002
- type: nauc_precision_at_10_max
value: 29.8653
- type: nauc_precision_at_10_std
value: 12.945699999999999
- type: nauc_precision_at_10_diff1
value: 11.091800000000001
- type: nauc_precision_at_20_max
value: 22.145300000000002
- type: nauc_precision_at_20_std
value: 14.2574
- type: nauc_precision_at_20_diff1
value: 2.8937999999999997
- type: nauc_precision_at_100_max
value: 13.1369
- type: nauc_precision_at_100_std
value: 14.579600000000001
- type: nauc_precision_at_100_diff1
value: -9.4206
- type: nauc_precision_at_1000_max
value: 3.6643000000000003
- type: nauc_precision_at_1000_std
value: 10.0171
- type: nauc_precision_at_1000_diff1
value: -14.5572
- type: nauc_mrr_at_1_max
value: 40.0497
- type: nauc_mrr_at_1_std
value: -2.9893
- type: nauc_mrr_at_1_diff1
value: 52.6498
- type: nauc_mrr_at_3_max
value: 39.2932
- type: nauc_mrr_at_3_std
value: -1.2786
- type: nauc_mrr_at_3_diff1
value: 48.8373
- type: nauc_mrr_at_5_max
value: 39.495999999999995
- type: nauc_mrr_at_5_std
value: -1.0756
- type: nauc_mrr_at_5_diff1
value: 48.6192
- type: nauc_mrr_at_10_max
value: 39.617200000000004
- type: nauc_mrr_at_10_std
value: -0.5789
- type: nauc_mrr_at_10_diff1
value: 49.0029
- type: nauc_mrr_at_20_max
value: 39.5561
- type: nauc_mrr_at_20_std
value: -0.5488
- type: nauc_mrr_at_20_diff1
value: 48.9423
- type: nauc_mrr_at_100_max
value: 39.5619
- type: nauc_mrr_at_100_std
value: -0.6031
- type: nauc_mrr_at_100_diff1
value: 49.0095
- type: nauc_mrr_at_1000_max
value: 39.5703
- type: nauc_mrr_at_1000_std
value: -0.618
- type: nauc_mrr_at_1000_diff1
value: 49.0181
- type: main_score
value: 55.001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: ndcg_at_1
value: 40.068
- type: ndcg_at_3
value: 44.096999999999994
- type: ndcg_at_5
value: 47.516999999999996
- type: ndcg_at_10
value: 50.446999999999996
- type: ndcg_at_20
value: 52.519000000000005
- type: ndcg_at_100
value: 55.833999999999996
- type: ndcg_at_1000
value: 57.43000000000001
- type: map_at_1
value: 31.428
- type: map_at_3
value: 39.353
- type: map_at_5
value: 42.094
- type: map_at_10
value: 43.725
- type: map_at_20
value: 44.471
- type: map_at_100
value: 45.117000000000004
- type: map_at_1000
value: 45.211
- type: recall_at_1
value: 31.428
- type: recall_at_3
value: 46.575
- type: recall_at_5
value: 55.515
- type: recall_at_10
value: 64.13799999999999
- type: recall_at_20
value: 71.279
- type: recall_at_100
value: 86.762
- type: recall_at_1000
value: 97.09100000000001
- type: precision_at_1
value: 40.068
- type: precision_at_3
value: 21.195
- type: precision_at_5
value: 15.822
- type: precision_at_10
value: 9.521
- type: precision_at_20
value: 5.497
- type: precision_at_100
value: 1.402
- type: precision_at_1000
value: 0.16999999999999998
- type: mrr_at_1
value: 40.0685
- type: mrr_at_3
value: 47.0129
- type: mrr_at_5
value: 48.856500000000004
- type: mrr_at_10
value: 49.9012
- type: mrr_at_20
value: 50.3737
- type: mrr_at_100
value: 50.7177
- type: mrr_at_1000
value: 50.756299999999996
- type: nauc_ndcg_at_1_max
value: 48.9094
- type: nauc_ndcg_at_1_std
value: 2.946
- type: nauc_ndcg_at_1_diff1
value: 50.792899999999996
- type: nauc_ndcg_at_3_max
value: 44.0954
- type: nauc_ndcg_at_3_std
value: 4.5854
- type: nauc_ndcg_at_3_diff1
value: 44.4164
- type: nauc_ndcg_at_5_max
value: 44.8318
- type: nauc_ndcg_at_5_std
value: 6.72
- type: nauc_ndcg_at_5_diff1
value: 44.5299
- type: nauc_ndcg_at_10_max
value: 45.9722
- type: nauc_ndcg_at_10_std
value: 7.829
- type: nauc_ndcg_at_10_diff1
value: 43.7881
- type: nauc_ndcg_at_20_max
value: 46.2939
- type: nauc_ndcg_at_20_std
value: 8.7342
- type: nauc_ndcg_at_20_diff1
value: 44.252900000000004
- type: nauc_ndcg_at_100_max
value: 46.6306
- type: nauc_ndcg_at_100_std
value: 9.4862
- type: nauc_ndcg_at_100_diff1
value: 44.168099999999995
- type: nauc_ndcg_at_1000_max
value: 46.679500000000004
- type: nauc_ndcg_at_1000_std
value: 8.5999
- type: nauc_ndcg_at_1000_diff1
value: 44.654500000000006
- type: nauc_map_at_1_max
value: 41.802499999999995
- type: nauc_map_at_1_std
value: -1.5448
- type: nauc_map_at_1_diff1
value: 51.3189
- type: nauc_map_at_3_max
value: 42.2779
- type: nauc_map_at_3_std
value: 2.4886
- type: nauc_map_at_3_diff1
value: 45.8305
- type: nauc_map_at_5_max
value: 43.6601
- type: nauc_map_at_5_std
value: 4.7118
- type: nauc_map_at_5_diff1
value: 45.8307
- type: nauc_map_at_10_max
value: 44.725300000000004
- type: nauc_map_at_10_std
value: 5.578799999999999
- type: nauc_map_at_10_diff1
value: 45.5339
- type: nauc_map_at_20_max
value: 44.979
- type: nauc_map_at_20_std
value: 5.9147
- type: nauc_map_at_20_diff1
value: 45.6175
- type: nauc_map_at_100_max
value: 45.202799999999996
- type: nauc_map_at_100_std
value: 6.1206000000000005
- type: nauc_map_at_100_diff1
value: 45.692899999999995
- type: nauc_map_at_1000_max
value: 45.2034
- type: nauc_map_at_1000_std
value: 6.097
- type: nauc_map_at_1000_diff1
value: 45.7149
- type: nauc_recall_at_1_max
value: 41.802499999999995
- type: nauc_recall_at_1_std
value: -1.5448
- type: nauc_recall_at_1_diff1
value: 51.3189
- type: nauc_recall_at_3_max
value: 38.0537
- type: nauc_recall_at_3_std
value: 4.067
- type: nauc_recall_at_3_diff1
value: 38.8726
- type: nauc_recall_at_5_max
value: 39.3573
- type: nauc_recall_at_5_std
value: 9.5685
- type: nauc_recall_at_5_diff1
value: 37.5925
- type: nauc_recall_at_10_max
value: 42.2936
- type: nauc_recall_at_10_std
value: 13.8155
- type: nauc_recall_at_10_diff1
value: 34.5176
- type: nauc_recall_at_20_max
value: 43.5257
- type: nauc_recall_at_20_std
value: 19.2427
- type: nauc_recall_at_20_diff1
value: 35.8971
- type: nauc_recall_at_100_max
value: 44.2485
- type: nauc_recall_at_100_std
value: 34.4593
- type: nauc_recall_at_100_diff1
value: 30.2192
- type: nauc_recall_at_1000_max
value: 56.7136
- type: nauc_recall_at_1000_std
value: 61.5111
- type: nauc_recall_at_1000_diff1
value: 32.9767
- type: nauc_precision_at_1_max
value: 48.9094
- type: nauc_precision_at_1_std
value: 2.946
- type: nauc_precision_at_1_diff1
value: 50.792899999999996
- type: nauc_precision_at_3_max
value: 42.5079
- type: nauc_precision_at_3_std
value: 12.2541
- type: nauc_precision_at_3_diff1
value: 28.8997
- type: nauc_precision_at_5_max
value: 38.399699999999996
- type: nauc_precision_at_5_std
value: 17.0376
- type: nauc_precision_at_5_diff1
value: 21.3869
- type: nauc_precision_at_10_max
value: 34.8194
- type: nauc_precision_at_10_std
value: 18.3221
- type: nauc_precision_at_10_diff1
value: 12.9642
- type: nauc_precision_at_20_max
value: 28.1161
- type: nauc_precision_at_20_std
value: 17.7852
- type: nauc_precision_at_20_diff1
value: 7.283199999999999
- type: nauc_precision_at_100_max
value: 15.3001
- type: nauc_precision_at_100_std
value: 12.8588
- type: nauc_precision_at_100_diff1
value: -3.2960000000000003
- type: nauc_precision_at_1000_max
value: 1.5663
- type: nauc_precision_at_1000_std
value: 1.7748
- type: nauc_precision_at_1000_diff1
value: -8.8656
- type: nauc_mrr_at_1_max
value: 48.9094
- type: nauc_mrr_at_1_std
value: 2.946
- type: nauc_mrr_at_1_diff1
value: 50.792899999999996
- type: nauc_mrr_at_3_max
value: 47.6835
- type: nauc_mrr_at_3_std
value: 5.3593
- type: nauc_mrr_at_3_diff1
value: 46.109
- type: nauc_mrr_at_5_max
value: 47.570299999999996
- type: nauc_mrr_at_5_std
value: 5.777299999999999
- type: nauc_mrr_at_5_diff1
value: 45.8975
- type: nauc_mrr_at_10_max
value: 47.7796
- type: nauc_mrr_at_10_std
value: 6.0919
- type: nauc_mrr_at_10_diff1
value: 45.593
- type: nauc_mrr_at_20_max
value: 47.783
- type: nauc_mrr_at_20_std
value: 6.1481
- type: nauc_mrr_at_20_diff1
value: 45.818999999999996
- type: nauc_mrr_at_100_max
value: 47.7483
- type: nauc_mrr_at_100_std
value: 6.1742
- type: nauc_mrr_at_100_diff1
value: 45.8742
- type: nauc_mrr_at_1000_max
value: 47.7586
- type: nauc_mrr_at_1000_std
value: 6.1544
- type: nauc_mrr_at_1000_diff1
value: 45.894
- type: main_score
value: 50.446999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 47.938333333333325
- type: ndcg_at_10
value: 47.938333333333325
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: ndcg_at_1
value: 32.362
- type: ndcg_at_3
value: 36.498000000000005
- type: ndcg_at_5
value: 38.729
- type: ndcg_at_10
value: 41.567
- type: ndcg_at_20
value: 43.247
- type: ndcg_at_100
value: 46.714
- type: ndcg_at_1000
value: 48.858000000000004
- type: map_at_1
value: 28.124
- type: map_at_3
value: 33.67
- type: map_at_5
value: 35.15
- type: map_at_10
value: 36.481
- type: map_at_20
value: 36.998
- type: map_at_100
value: 37.564
- type: map_at_1000
value: 37.651
- type: recall_at_1
value: 28.124
- type: recall_at_3
value: 39.623999999999995
- type: recall_at_5
value: 45.373999999999995
- type: recall_at_10
value: 53.761
- type: recall_at_20
value: 60.141999999999996
- type: recall_at_100
value: 77.145
- type: recall_at_1000
value: 92.809
- type: precision_at_1
value: 32.362
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.951
- type: precision_at_10
value: 6.718
- type: precision_at_20
value: 3.781
- type: precision_at_100
value: 1.011
- type: precision_at_1000
value: 0.126
- type: mrr_at_1
value: 32.362
- type: mrr_at_3
value: 37.5256
- type: mrr_at_5
value: 38.813900000000004
- type: mrr_at_10
value: 39.9955
- type: mrr_at_20
value: 40.4099
- type: mrr_at_100
value: 40.8508
- type: mrr_at_1000
value: 40.9125
- type: nauc_ndcg_at_1_max
value: 45.6295
- type: nauc_ndcg_at_1_std
value: -0.9893000000000001
- type: nauc_ndcg_at_1_diff1
value: 60.5302
- type: nauc_ndcg_at_3_max
value: 46.4109
- type: nauc_ndcg_at_3_std
value: 1.2275
- type: nauc_ndcg_at_3_diff1
value: 55.38999999999999
- type: nauc_ndcg_at_5_max
value: 44.9415
- type: nauc_ndcg_at_5_std
value: 1.221
- type: nauc_ndcg_at_5_diff1
value: 53.0919
- type: nauc_ndcg_at_10_max
value: 44.3985
- type: nauc_ndcg_at_10_std
value: 3.0568999999999997
- type: nauc_ndcg_at_10_diff1
value: 52.9713
- type: nauc_ndcg_at_20_max
value: 43.802400000000006
- type: nauc_ndcg_at_20_std
value: 3.5957000000000003
- type: nauc_ndcg_at_20_diff1
value: 51.765499999999996
- type: nauc_ndcg_at_100_max
value: 45.0729
- type: nauc_ndcg_at_100_std
value: 4.8267
- type: nauc_ndcg_at_100_diff1
value: 51.7238
- type: nauc_ndcg_at_1000_max
value: 45.4394
- type: nauc_ndcg_at_1000_std
value: 4.9554
- type: nauc_ndcg_at_1000_diff1
value: 52.7547
- type: nauc_map_at_1_max
value: 41.9895
- type: nauc_map_at_1_std
value: -3.7581999999999995
- type: nauc_map_at_1_diff1
value: 60.44310000000001
- type: nauc_map_at_3_max
value: 45.1699
- type: nauc_map_at_3_std
value: -0.23839999999999997
- type: nauc_map_at_3_diff1
value: 56.36600000000001
- type: nauc_map_at_5_max
value: 44.6674
- type: nauc_map_at_5_std
value: 0.2771
- type: nauc_map_at_5_diff1
value: 55.1832
- type: nauc_map_at_10_max
value: 44.5561
- type: nauc_map_at_10_std
value: 1.1383
- type: nauc_map_at_10_diff1
value: 55.199
- type: nauc_map_at_20_max
value: 44.426100000000005
- type: nauc_map_at_20_std
value: 1.2463
- type: nauc_map_at_20_diff1
value: 54.8601
- type: nauc_map_at_100_max
value: 44.6656
- type: nauc_map_at_100_std
value: 1.4344000000000001
- type: nauc_map_at_100_diff1
value: 54.8404
- type: nauc_map_at_1000_max
value: 44.6885
- type: nauc_map_at_1000_std
value: 1.4637
- type: nauc_map_at_1000_diff1
value: 54.879
- type: nauc_recall_at_1_max
value: 41.9895
- type: nauc_recall_at_1_std
value: -3.7581999999999995
- type: nauc_recall_at_1_diff1
value: 60.44310000000001
- type: nauc_recall_at_3_max
value: 44.267
- type: nauc_recall_at_3_std
value: 1.9900999999999998
- type: nauc_recall_at_3_diff1
value: 50.2301
- type: nauc_recall_at_5_max
value: 41.025800000000004
- type: nauc_recall_at_5_std
value: 1.7608
- type: nauc_recall_at_5_diff1
value: 44.4415
- type: nauc_recall_at_10_max
value: 39.1839
- type: nauc_recall_at_10_std
value: 7.1819
- type: nauc_recall_at_10_diff1
value: 43.2659
- type: nauc_recall_at_20_max
value: 35.619099999999996
- type: nauc_recall_at_20_std
value: 9.3642
- type: nauc_recall_at_20_diff1
value: 37.6444
- type: nauc_recall_at_100_max
value: 41.6917
- type: nauc_recall_at_100_std
value: 21.5639
- type: nauc_recall_at_100_diff1
value: 31.6295
- type: nauc_recall_at_1000_max
value: 48.0423
- type: nauc_recall_at_1000_std
value: 54.2675
- type: nauc_recall_at_1000_diff1
value: 29.726399999999998
- type: nauc_precision_at_1_max
value: 45.6295
- type: nauc_precision_at_1_std
value: -0.9893000000000001
- type: nauc_precision_at_1_diff1
value: 60.5302
- type: nauc_precision_at_3_max
value: 49.6365
- type: nauc_precision_at_3_std
value: 7.6746
- type: nauc_precision_at_3_diff1
value: 48.4452
- type: nauc_precision_at_5_max
value: 45.906400000000005
- type: nauc_precision_at_5_std
value: 10.616399999999999
- type: nauc_precision_at_5_diff1
value: 41.393299999999996
- type: nauc_precision_at_10_max
value: 40.9508
- type: nauc_precision_at_10_std
value: 14.360100000000001
- type: nauc_precision_at_10_diff1
value: 35.6927
- type: nauc_precision_at_20_max
value: 36.5696
- type: nauc_precision_at_20_std
value: 15.232499999999998
- type: nauc_precision_at_20_diff1
value: 28.5742
- type: nauc_precision_at_100_max
value: 29.4906
- type: nauc_precision_at_100_std
value: 18.8562
- type: nauc_precision_at_100_diff1
value: 14.689
- type: nauc_precision_at_1000_max
value: 16.2971
- type: nauc_precision_at_1000_std
value: 14.349
- type: nauc_precision_at_1000_diff1
value: 4.972300000000001
- type: nauc_mrr_at_1_max
value: 45.6295
- type: nauc_mrr_at_1_std
value: -0.9893000000000001
- type: nauc_mrr_at_1_diff1
value: 60.5302
- type: nauc_mrr_at_3_max
value: 47.340900000000005
- type: nauc_mrr_at_3_std
value: 1.9640000000000002
- type: nauc_mrr_at_3_diff1
value: 56.7908
- type: nauc_mrr_at_5_max
value: 46.8151
- type: nauc_mrr_at_5_std
value: 2.0004
- type: nauc_mrr_at_5_diff1
value: 55.49230000000001
- type: nauc_mrr_at_10_max
value: 46.5702
- type: nauc_mrr_at_10_std
value: 2.5755
- type: nauc_mrr_at_10_diff1
value: 55.3437
- type: nauc_mrr_at_20_max
value: 46.3775
- type: nauc_mrr_at_20_std
value: 2.7186
- type: nauc_mrr_at_20_diff1
value: 55.084
- type: nauc_mrr_at_100_max
value: 46.428599999999996
- type: nauc_mrr_at_100_std
value: 2.7332
- type: nauc_mrr_at_100_diff1
value: 55.088499999999996
- type: nauc_mrr_at_1000_max
value: 46.443200000000004
- type: nauc_mrr_at_1000_std
value: 2.7476000000000003
- type: nauc_mrr_at_1000_diff1
value: 55.1161
- type: main_score
value: 41.567
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: ndcg_at_1
value: 27.22
- type: ndcg_at_3
value: 31.52
- type: ndcg_at_5
value: 33.964
- type: ndcg_at_10
value: 36.581
- type: ndcg_at_20
value: 38.665
- type: ndcg_at_100
value: 42.268
- type: ndcg_at_1000
value: 44.748
- type: map_at_1
value: 22.017
- type: map_at_3
value: 27.927999999999997
- type: map_at_5
value: 29.668
- type: map_at_10
value: 31.002999999999997
- type: map_at_20
value: 31.685999999999996
- type: map_at_100
value: 32.29
- type: map_at_1000
value: 32.413
- type: recall_at_1
value: 22.017
- type: recall_at_3
value: 34.009
- type: recall_at_5
value: 40.416000000000004
- type: recall_at_10
value: 48.415
- type: recall_at_20
value: 56.038
- type: recall_at_100
value: 73.851
- type: recall_at_1000
value: 91.03999999999999
- type: precision_at_1
value: 27.22
- type: precision_at_3
value: 15.084
- type: precision_at_5
value: 11.094
- type: precision_at_10
value: 6.800000000000001
- type: precision_at_20
value: 4.054
- type: precision_at_100
value: 1.134
- type: precision_at_1000
value: 0.153
- type: mrr_at_1
value: 27.219500000000004
- type: mrr_at_3
value: 33.132600000000004
- type: mrr_at_5
value: 34.694900000000004
- type: mrr_at_10
value: 35.7418
- type: mrr_at_20
value: 36.273300000000006
- type: mrr_at_100
value: 36.6689
- type: mrr_at_1000
value: 36.7336
- type: nauc_ndcg_at_1_max
value: 26.5332
- type: nauc_ndcg_at_1_std
value: 6.7063999999999995
- type: nauc_ndcg_at_1_diff1
value: 41.7063
- type: nauc_ndcg_at_3_max
value: 26.294
- type: nauc_ndcg_at_3_std
value: 6.9978
- type: nauc_ndcg_at_3_diff1
value: 36.8322
- type: nauc_ndcg_at_5_max
value: 27.6001
- type: nauc_ndcg_at_5_std
value: 8.1389
- type: nauc_ndcg_at_5_diff1
value: 36.5894
- type: nauc_ndcg_at_10_max
value: 27.5265
- type: nauc_ndcg_at_10_std
value: 8.5525
- type: nauc_ndcg_at_10_diff1
value: 36.2451
- type: nauc_ndcg_at_20_max
value: 27.2368
- type: nauc_ndcg_at_20_std
value: 9.4667
- type: nauc_ndcg_at_20_diff1
value: 35.553000000000004
- type: nauc_ndcg_at_100_max
value: 27.6221
- type: nauc_ndcg_at_100_std
value: 10.9896
- type: nauc_ndcg_at_100_diff1
value: 35.010799999999996
- type: nauc_ndcg_at_1000_max
value: 27.823700000000002
- type: nauc_ndcg_at_1000_std
value: 10.5646
- type: nauc_ndcg_at_1000_diff1
value: 35.4455
- type: nauc_map_at_1_max
value: 24.0311
- type: nauc_map_at_1_std
value: 4.9532
- type: nauc_map_at_1_diff1
value: 41.0051
- type: nauc_map_at_3_max
value: 25.3489
- type: nauc_map_at_3_std
value: 6.229
- type: nauc_map_at_3_diff1
value: 37.5031
- type: nauc_map_at_5_max
value: 26.3791
- type: nauc_map_at_5_std
value: 7.036499999999999
- type: nauc_map_at_5_diff1
value: 37.3731
- type: nauc_map_at_10_max
value: 26.4979
- type: nauc_map_at_10_std
value: 7.3984
- type: nauc_map_at_10_diff1
value: 37.213499999999996
- type: nauc_map_at_20_max
value: 26.505000000000003
- type: nauc_map_at_20_std
value: 7.749300000000001
- type: nauc_map_at_20_diff1
value: 37.0079
- type: nauc_map_at_100_max
value: 26.625700000000002
- type: nauc_map_at_100_std
value: 8.0921
- type: nauc_map_at_100_diff1
value: 36.9709
- type: nauc_map_at_1000_max
value: 26.6505
- type: nauc_map_at_1000_std
value: 8.093599999999999
- type: nauc_map_at_1000_diff1
value: 37.0068
- type: nauc_recall_at_1_max
value: 24.0311
- type: nauc_recall_at_1_std
value: 4.9532
- type: nauc_recall_at_1_diff1
value: 41.0051
- type: nauc_recall_at_3_max
value: 24.7976
- type: nauc_recall_at_3_std
value: 6.1747000000000005
- type: nauc_recall_at_3_diff1
value: 33.226299999999995
- type: nauc_recall_at_5_max
value: 27.539599999999997
- type: nauc_recall_at_5_std
value: 8.8065
- type: nauc_recall_at_5_diff1
value: 32.2612
- type: nauc_recall_at_10_max
value: 26.8189
- type: nauc_recall_at_10_std
value: 9.7864
- type: nauc_recall_at_10_diff1
value: 30.2757
- type: nauc_recall_at_20_max
value: 25.183699999999998
- type: nauc_recall_at_20_std
value: 12.916
- type: nauc_recall_at_20_diff1
value: 27.092100000000002
- type: nauc_recall_at_100_max
value: 26.9612
- type: nauc_recall_at_100_std
value: 24.1506
- type: nauc_recall_at_100_diff1
value: 20.9473
- type: nauc_recall_at_1000_max
value: 34.2476
- type: nauc_recall_at_1000_std
value: 35.335499999999996
- type: nauc_recall_at_1000_diff1
value: 13.6745
- type: nauc_precision_at_1_max
value: 26.5332
- type: nauc_precision_at_1_std
value: 6.7063999999999995
- type: nauc_precision_at_1_diff1
value: 41.7063
- type: nauc_precision_at_3_max
value: 27.1377
- type: nauc_precision_at_3_std
value: 9.1475
- type: nauc_precision_at_3_diff1
value: 30.6856
- type: nauc_precision_at_5_max
value: 28.0165
- type: nauc_precision_at_5_std
value: 11.5134
- type: nauc_precision_at_5_diff1
value: 27.454600000000003
- type: nauc_precision_at_10_max
value: 25.393700000000003
- type: nauc_precision_at_10_std
value: 13.100000000000001
- type: nauc_precision_at_10_diff1
value: 23.296400000000002
- type: nauc_precision_at_20_max
value: 22.0302
- type: nauc_precision_at_20_std
value: 15.6886
- type: nauc_precision_at_20_diff1
value: 18.0761
- type: nauc_precision_at_100_max
value: 18.754
- type: nauc_precision_at_100_std
value: 18.049599999999998
- type: nauc_precision_at_100_diff1
value: 10.578999999999999
- type: nauc_precision_at_1000_max
value: 15.4445
- type: nauc_precision_at_1000_std
value: 10.5797
- type: nauc_precision_at_1000_diff1
value: 8.6555
- type: nauc_mrr_at_1_max
value: 26.5332
- type: nauc_mrr_at_1_std
value: 6.7063999999999995
- type: nauc_mrr_at_1_diff1
value: 41.7063
- type: nauc_mrr_at_3_max
value: 27.048
- type: nauc_mrr_at_3_std
value: 7.0742
- type: nauc_mrr_at_3_diff1
value: 38.388
- type: nauc_mrr_at_5_max
value: 27.6961
- type: nauc_mrr_at_5_std
value: 7.7979
- type: nauc_mrr_at_5_diff1
value: 38.2328
- type: nauc_mrr_at_10_max
value: 27.6906
- type: nauc_mrr_at_10_std
value: 7.8747
- type: nauc_mrr_at_10_diff1
value: 38.061099999999996
- type: nauc_mrr_at_20_max
value: 27.5596
- type: nauc_mrr_at_20_std
value: 8.1191
- type: nauc_mrr_at_20_diff1
value: 37.8976
- type: nauc_mrr_at_100_max
value: 27.596500000000002
- type: nauc_mrr_at_100_std
value: 8.2534
- type: nauc_mrr_at_100_diff1
value: 37.9023
- type: nauc_mrr_at_1000_max
value: 27.6032
- type: nauc_mrr_at_1000_std
value: 8.2402
- type: nauc_mrr_at_1000_diff1
value: 37.9149
- type: main_score
value: 36.581
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: ndcg_at_1
value: 37.687
- type: ndcg_at_3
value: 43.254
- type: ndcg_at_5
value: 45.518
- type: ndcg_at_10
value: 48.57
- type: ndcg_at_20
value: 50.870000000000005
- type: ndcg_at_100
value: 53.964999999999996
- type: ndcg_at_1000
value: 55.704
- type: map_at_1
value: 31.507
- type: map_at_3
value: 39.26
- type: map_at_5
value: 41.006
- type: map_at_10
value: 42.559999999999995
- type: map_at_20
value: 43.330999999999996
- type: map_at_100
value: 43.852000000000004
- type: map_at_1000
value: 43.939
- type: recall_at_1
value: 31.507
- type: recall_at_3
value: 47.019
- type: recall_at_5
value: 53.105999999999995
- type: recall_at_10
value: 62.061
- type: recall_at_20
value: 70.206
- type: recall_at_100
value: 84.87400000000001
- type: recall_at_1000
value: 96.627
- type: precision_at_1
value: 37.687
- type: precision_at_3
value: 20.055999999999997
- type: precision_at_5
value: 13.899000000000001
- type: precision_at_10
value: 8.34
- type: precision_at_20
value: 4.841
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.148
- type: mrr_at_1
value: 37.6866
- type: mrr_at_3
value: 44.760600000000004
- type: mrr_at_5
value: 46.1505
- type: mrr_at_10
value: 47.2768
- type: mrr_at_20
value: 47.8296
- type: mrr_at_100
value: 48.147099999999995
- type: mrr_at_1000
value: 48.1922
- type: nauc_ndcg_at_1_max
value: 44.6488
- type: nauc_ndcg_at_1_std
value: 4.1984
- type: nauc_ndcg_at_1_diff1
value: 56.6913
- type: nauc_ndcg_at_3_max
value: 41.7574
- type: nauc_ndcg_at_3_std
value: 2.3933
- type: nauc_ndcg_at_3_diff1
value: 49.532900000000005
- type: nauc_ndcg_at_5_max
value: 40.976099999999995
- type: nauc_ndcg_at_5_std
value: 1.6297
- type: nauc_ndcg_at_5_diff1
value: 49.6749
- type: nauc_ndcg_at_10_max
value: 41.645199999999996
- type: nauc_ndcg_at_10_std
value: 3.6906000000000003
- type: nauc_ndcg_at_10_diff1
value: 48.6944
- type: nauc_ndcg_at_20_max
value: 42.5723
- type: nauc_ndcg_at_20_std
value: 5.2003
- type: nauc_ndcg_at_20_diff1
value: 49.2712
- type: nauc_ndcg_at_100_max
value: 43.1712
- type: nauc_ndcg_at_100_std
value: 5.9054
- type: nauc_ndcg_at_100_diff1
value: 49.4614
- type: nauc_ndcg_at_1000_max
value: 43.051899999999996
- type: nauc_ndcg_at_1000_std
value: 5.2268
- type: nauc_ndcg_at_1000_diff1
value: 49.9268
- type: nauc_map_at_1_max
value: 40.5466
- type: nauc_map_at_1_std
value: 2.5801000000000003
- type: nauc_map_at_1_diff1
value: 57.2554
- type: nauc_map_at_3_max
value: 40.7917
- type: nauc_map_at_3_std
value: 1.926
- type: nauc_map_at_3_diff1
value: 51.5164
- type: nauc_map_at_5_max
value: 40.8638
- type: nauc_map_at_5_std
value: 1.9499
- type: nauc_map_at_5_diff1
value: 51.4491
- type: nauc_map_at_10_max
value: 41.4658
- type: nauc_map_at_10_std
value: 2.8266
- type: nauc_map_at_10_diff1
value: 50.9612
- type: nauc_map_at_20_max
value: 41.894999999999996
- type: nauc_map_at_20_std
value: 3.3461
- type: nauc_map_at_20_diff1
value: 51.0836
- type: nauc_map_at_100_max
value: 42.0161
- type: nauc_map_at_100_std
value: 3.4995
- type: nauc_map_at_100_diff1
value: 51.083
- type: nauc_map_at_1000_max
value: 42.0041
- type: nauc_map_at_1000_std
value: 3.4733
- type: nauc_map_at_1000_diff1
value: 51.1013
- type: nauc_recall_at_1_max
value: 40.5466
- type: nauc_recall_at_1_std
value: 2.5801000000000003
- type: nauc_recall_at_1_diff1
value: 57.2554
- type: nauc_recall_at_3_max
value: 37.2587
- type: nauc_recall_at_3_std
value: 0.3079
- type: nauc_recall_at_3_diff1
value: 43.9748
- type: nauc_recall_at_5_max
value: 35.9724
- type: nauc_recall_at_5_std
value: -0.4043
- type: nauc_recall_at_5_diff1
value: 43.2259
- type: nauc_recall_at_10_max
value: 36.5054
- type: nauc_recall_at_10_std
value: 6.050599999999999
- type: nauc_recall_at_10_diff1
value: 38.3286
- type: nauc_recall_at_20_max
value: 39.2692
- type: nauc_recall_at_20_std
value: 12.623599999999998
- type: nauc_recall_at_20_diff1
value: 39.7523
- type: nauc_recall_at_100_max
value: 44.7179
- type: nauc_recall_at_100_std
value: 24.0593
- type: nauc_recall_at_100_diff1
value: 37.7336
- type: nauc_recall_at_1000_max
value: 53.7754
- type: nauc_recall_at_1000_std
value: 45.2282
- type: nauc_recall_at_1000_diff1
value: 37.3474
- type: nauc_precision_at_1_max
value: 44.6488
- type: nauc_precision_at_1_std
value: 4.1984
- type: nauc_precision_at_1_diff1
value: 56.6913
- type: nauc_precision_at_3_max
value: 39.659499999999994
- type: nauc_precision_at_3_std
value: 4.1345
- type: nauc_precision_at_3_diff1
value: 34.713300000000004
- type: nauc_precision_at_5_max
value: 36.207
- type: nauc_precision_at_5_std
value: 3.4388
- type: nauc_precision_at_5_diff1
value: 29.0749
- type: nauc_precision_at_10_max
value: 32.9216
- type: nauc_precision_at_10_std
value: 7.0937
- type: nauc_precision_at_10_diff1
value: 18.8157
- type: nauc_precision_at_20_max
value: 29.9111
- type: nauc_precision_at_20_std
value: 11.0234
- type: nauc_precision_at_20_diff1
value: 12.5669
- type: nauc_precision_at_100_max
value: 16.6883
- type: nauc_precision_at_100_std
value: 11.183300000000001
- type: nauc_precision_at_100_diff1
value: -1.4782
- type: nauc_precision_at_1000_max
value: 1.7108999999999999
- type: nauc_precision_at_1000_std
value: 2.5909999999999997
- type: nauc_precision_at_1000_diff1
value: -11.336300000000001
- type: nauc_mrr_at_1_max
value: 44.6488
- type: nauc_mrr_at_1_std
value: 4.1984
- type: nauc_mrr_at_1_diff1
value: 56.6913
- type: nauc_mrr_at_3_max
value: 43.7515
- type: nauc_mrr_at_3_std
value: 3.3159
- type: nauc_mrr_at_3_diff1
value: 51.294399999999996
- type: nauc_mrr_at_5_max
value: 43.5602
- type: nauc_mrr_at_5_std
value: 2.8770000000000002
- type: nauc_mrr_at_5_diff1
value: 51.3629
- type: nauc_mrr_at_10_max
value: 43.743900000000004
- type: nauc_mrr_at_10_std
value: 3.7014
- type: nauc_mrr_at_10_diff1
value: 50.9399
- type: nauc_mrr_at_20_max
value: 43.736000000000004
- type: nauc_mrr_at_20_std
value: 3.8751
- type: nauc_mrr_at_20_diff1
value: 51.056400000000004
- type: nauc_mrr_at_100_max
value: 43.749
- type: nauc_mrr_at_100_std
value: 3.9109
- type: nauc_mrr_at_100_diff1
value: 51.12989999999999
- type: nauc_mrr_at_1000_max
value: 43.7543
- type: nauc_mrr_at_1000_std
value: 3.9046
- type: nauc_mrr_at_1000_diff1
value: 51.144999999999996
- type: main_score
value: 48.57
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_3
value: 41.804
- type: ndcg_at_5
value: 43.954
- type: ndcg_at_10
value: 46.46
- type: ndcg_at_20
value: 48.881
- type: ndcg_at_100
value: 52.425
- type: ndcg_at_1000
value: 54.373000000000005
- type: map_at_1
value: 29.341
- type: map_at_3
value: 37.120999999999995
- type: map_at_5
value: 38.734
- type: map_at_10
value: 40.158
- type: map_at_20
value: 41.14
- type: map_at_100
value: 42.037
- type: map_at_1000
value: 42.271
- type: recall_at_1
value: 29.341
- type: recall_at_3
value: 44.542
- type: recall_at_5
value: 50.193
- type: recall_at_10
value: 57.894
- type: recall_at_20
value: 66.841
- type: recall_at_100
value: 84.343
- type: recall_at_1000
value: 96.643
- type: precision_at_1
value: 35.375
- type: precision_at_3
value: 20.026
- type: precision_at_5
value: 14.466000000000001
- type: precision_at_10
value: 9.110999999999999
- type: precision_at_20
value: 5.899
- type: precision_at_100
value: 1.8499999999999999
- type: precision_at_1000
value: 0.255
- type: mrr_at_1
value: 35.375499999999995
- type: mrr_at_3
value: 42.1937
- type: mrr_at_5
value: 43.834
- type: mrr_at_10
value: 44.7625
- type: mrr_at_20
value: 45.3485
- type: mrr_at_100
value: 45.7111
- type: mrr_at_1000
value: 45.7566
- type: nauc_ndcg_at_1_max
value: 23.5952
- type: nauc_ndcg_at_1_std
value: 5.8244
- type: nauc_ndcg_at_1_diff1
value: 53.12
- type: nauc_ndcg_at_3_max
value: 21.2669
- type: nauc_ndcg_at_3_std
value: 8.3372
- type: nauc_ndcg_at_3_diff1
value: 47.216
- type: nauc_ndcg_at_5_max
value: 21.0726
- type: nauc_ndcg_at_5_std
value: 10.0045
- type: nauc_ndcg_at_5_diff1
value: 47.5922
- type: nauc_ndcg_at_10_max
value: 21.035999999999998
- type: nauc_ndcg_at_10_std
value: 8.706999999999999
- type: nauc_ndcg_at_10_diff1
value: 47.7902
- type: nauc_ndcg_at_20_max
value: 21.0952
- type: nauc_ndcg_at_20_std
value: 9.762
- type: nauc_ndcg_at_20_diff1
value: 47.6359
- type: nauc_ndcg_at_100_max
value: 22.5778
- type: nauc_ndcg_at_100_std
value: 11.4197
- type: nauc_ndcg_at_100_diff1
value: 48.450500000000005
- type: nauc_ndcg_at_1000_max
value: 22.6857
- type: nauc_ndcg_at_1000_std
value: 10.741100000000001
- type: nauc_ndcg_at_1000_diff1
value: 47.876400000000004
- type: nauc_map_at_1_max
value: 16.4685
- type: nauc_map_at_1_std
value: 0.6203000000000001
- type: nauc_map_at_1_diff1
value: 55.691
- type: nauc_map_at_3_max
value: 19.2659
- type: nauc_map_at_3_std
value: 3.5485999999999995
- type: nauc_map_at_3_diff1
value: 50.8056
- type: nauc_map_at_5_max
value: 19.784299999999998
- type: nauc_map_at_5_std
value: 4.7257
- type: nauc_map_at_5_diff1
value: 50.6115
- type: nauc_map_at_10_max
value: 20.1772
- type: nauc_map_at_10_std
value: 4.8205
- type: nauc_map_at_10_diff1
value: 50.545399999999994
- type: nauc_map_at_20_max
value: 20.4281
- type: nauc_map_at_20_std
value: 5.945799999999999
- type: nauc_map_at_20_diff1
value: 50.2247
- type: nauc_map_at_100_max
value: 20.697599999999998
- type: nauc_map_at_100_std
value: 7.3290999999999995
- type: nauc_map_at_100_diff1
value: 49.9734
- type: nauc_map_at_1000_max
value: 20.686
- type: nauc_map_at_1000_std
value: 7.8218
- type: nauc_map_at_1000_diff1
value: 49.7498
- type: nauc_recall_at_1_max
value: 16.4685
- type: nauc_recall_at_1_std
value: 0.6203000000000001
- type: nauc_recall_at_1_diff1
value: 55.691
- type: nauc_recall_at_3_max
value: 17.5274
- type: nauc_recall_at_3_std
value: 5.2619
- type: nauc_recall_at_3_diff1
value: 44.4629
- type: nauc_recall_at_5_max
value: 18.3596
- type: nauc_recall_at_5_std
value: 9.3709
- type: nauc_recall_at_5_diff1
value: 44.800200000000004
- type: nauc_recall_at_10_max
value: 17.515
- type: nauc_recall_at_10_std
value: 7.080400000000001
- type: nauc_recall_at_10_diff1
value: 43.181799999999996
- type: nauc_recall_at_20_max
value: 17.605999999999998
- type: nauc_recall_at_20_std
value: 11.745700000000001
- type: nauc_recall_at_20_diff1
value: 41.4216
- type: nauc_recall_at_100_max
value: 27.6642
- type: nauc_recall_at_100_std
value: 31.652
- type: nauc_recall_at_100_diff1
value: 43.5986
- type: nauc_recall_at_1000_max
value: 48.9638
- type: nauc_recall_at_1000_std
value: 51.2076
- type: nauc_recall_at_1000_diff1
value: 29.369899999999998
- type: nauc_precision_at_1_max
value: 23.5952
- type: nauc_precision_at_1_std
value: 5.8244
- type: nauc_precision_at_1_diff1
value: 53.12
- type: nauc_precision_at_3_max
value: 23.935100000000002
- type: nauc_precision_at_3_std
value: 16.5914
- type: nauc_precision_at_3_diff1
value: 28.4103
- type: nauc_precision_at_5_max
value: 22.6905
- type: nauc_precision_at_5_std
value: 23.6646
- type: nauc_precision_at_5_diff1
value: 19.524
- type: nauc_precision_at_10_max
value: 22.1447
- type: nauc_precision_at_10_std
value: 25.2899
- type: nauc_precision_at_10_diff1
value: 10.0271
- type: nauc_precision_at_20_max
value: 16.7291
- type: nauc_precision_at_20_std
value: 34.3311
- type: nauc_precision_at_20_diff1
value: -3.1237999999999997
- type: nauc_precision_at_100_max
value: 9.431000000000001
- type: nauc_precision_at_100_std
value: 46.103300000000004
- type: nauc_precision_at_100_diff1
value: -17.1284
- type: nauc_precision_at_1000_max
value: 4.9976
- type: nauc_precision_at_1000_std
value: 47.3664
- type: nauc_precision_at_1000_diff1
value: -24.0582
- type: nauc_mrr_at_1_max
value: 23.5952
- type: nauc_mrr_at_1_std
value: 5.8244
- type: nauc_mrr_at_1_diff1
value: 53.12
- type: nauc_mrr_at_3_max
value: 23.0702
- type: nauc_mrr_at_3_std
value: 8.0486
- type: nauc_mrr_at_3_diff1
value: 48.1321
- type: nauc_mrr_at_5_max
value: 23.2691
- type: nauc_mrr_at_5_std
value: 9.1019
- type: nauc_mrr_at_5_diff1
value: 47.9119
- type: nauc_mrr_at_10_max
value: 23.2747
- type: nauc_mrr_at_10_std
value: 8.8027
- type: nauc_mrr_at_10_diff1
value: 48.0293
- type: nauc_mrr_at_20_max
value: 23.1632
- type: nauc_mrr_at_20_std
value: 8.8043
- type: nauc_mrr_at_20_diff1
value: 47.950900000000004
- type: nauc_mrr_at_100_max
value: 23.2738
- type: nauc_mrr_at_100_std
value: 8.963899999999999
- type: nauc_mrr_at_100_diff1
value: 48.0468
- type: nauc_mrr_at_1000_max
value: 23.2584
- type: nauc_mrr_at_1000_std
value: 8.9357
- type: nauc_mrr_at_1000_diff1
value: 48.0548
- type: main_score
value: 46.46
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: ndcg_at_1
value: 28.835
- type: ndcg_at_3
value: 34.629
- type: ndcg_at_5
value: 36.906
- type: ndcg_at_10
value: 39.722
- type: ndcg_at_20
value: 41.324
- type: ndcg_at_100
value: 44.856
- type: ndcg_at_1000
value: 47.174
- type: map_at_1
value: 26.001
- type: map_at_3
value: 31.916
- type: map_at_5
value: 33.303
- type: map_at_10
value: 34.594
- type: map_at_20
value: 35.081
- type: map_at_100
value: 35.592
- type: map_at_1000
value: 35.693000000000005
- type: recall_at_1
value: 26.001
- type: recall_at_3
value: 39.144
- type: recall_at_5
value: 44.572
- type: recall_at_10
value: 52.856
- type: recall_at_20
value: 58.78
- type: recall_at_100
value: 76.828
- type: recall_at_1000
value: 93.93
- type: precision_at_1
value: 28.835
- type: precision_at_3
value: 14.726
- type: precision_at_5
value: 10.24
- type: precision_at_10
value: 6.192
- type: precision_at_20
value: 3.4939999999999998
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.126
- type: mrr_at_1
value: 28.835499999999996
- type: mrr_at_3
value: 35.0277
- type: mrr_at_5
value: 36.3401
- type: mrr_at_10
value: 37.359500000000004
- type: mrr_at_20
value: 37.7675
- type: mrr_at_100
value: 38.2184
- type: mrr_at_1000
value: 38.287
- type: nauc_ndcg_at_1_max
value: 25.1156
- type: nauc_ndcg_at_1_std
value: 5.8615
- type: nauc_ndcg_at_1_diff1
value: 49.8287
- type: nauc_ndcg_at_3_max
value: 25.605299999999996
- type: nauc_ndcg_at_3_std
value: 6.9969
- type: nauc_ndcg_at_3_diff1
value: 45.1011
- type: nauc_ndcg_at_5_max
value: 25.2231
- type: nauc_ndcg_at_5_std
value: 6.674099999999999
- type: nauc_ndcg_at_5_diff1
value: 42.7723
- type: nauc_ndcg_at_10_max
value: 24.404899999999998
- type: nauc_ndcg_at_10_std
value: 6.3734
- type: nauc_ndcg_at_10_diff1
value: 41.9908
- type: nauc_ndcg_at_20_max
value: 23.9259
- type: nauc_ndcg_at_20_std
value: 6.749099999999999
- type: nauc_ndcg_at_20_diff1
value: 41.2996
- type: nauc_ndcg_at_100_max
value: 24.0414
- type: nauc_ndcg_at_100_std
value: 9.694700000000001
- type: nauc_ndcg_at_100_diff1
value: 41.0428
- type: nauc_ndcg_at_1000_max
value: 24.8501
- type: nauc_ndcg_at_1000_std
value: 8.7988
- type: nauc_ndcg_at_1000_diff1
value: 42.290499999999994
- type: nauc_map_at_1_max
value: 24.038999999999998
- type: nauc_map_at_1_std
value: 4.3147
- type: nauc_map_at_1_diff1
value: 51.1896
- type: nauc_map_at_3_max
value: 24.8903
- type: nauc_map_at_3_std
value: 5.8555
- type: nauc_map_at_3_diff1
value: 46.6276
- type: nauc_map_at_5_max
value: 24.9985
- type: nauc_map_at_5_std
value: 5.8602
- type: nauc_map_at_5_diff1
value: 45.2278
- type: nauc_map_at_10_max
value: 24.7287
- type: nauc_map_at_10_std
value: 5.7734
- type: nauc_map_at_10_diff1
value: 44.8438
- type: nauc_map_at_20_max
value: 24.698
- type: nauc_map_at_20_std
value: 5.86
- type: nauc_map_at_20_diff1
value: 44.635200000000005
- type: nauc_map_at_100_max
value: 24.726100000000002
- type: nauc_map_at_100_std
value: 6.307
- type: nauc_map_at_100_diff1
value: 44.587700000000005
- type: nauc_map_at_1000_max
value: 24.7303
- type: nauc_map_at_1000_std
value: 6.271400000000001
- type: nauc_map_at_1000_diff1
value: 44.6122
- type: nauc_recall_at_1_max
value: 24.038999999999998
- type: nauc_recall_at_1_std
value: 4.3147
- type: nauc_recall_at_1_diff1
value: 51.1896
- type: nauc_recall_at_3_max
value: 24.6154
- type: nauc_recall_at_3_std
value: 7.6124
- type: nauc_recall_at_3_diff1
value: 41.077999999999996
- type: nauc_recall_at_5_max
value: 23.8232
- type: nauc_recall_at_5_std
value: 7.3541
- type: nauc_recall_at_5_diff1
value: 35.070499999999996
- type: nauc_recall_at_10_max
value: 21.084500000000002
- type: nauc_recall_at_10_std
value: 6.592199999999999
- type: nauc_recall_at_10_diff1
value: 32.1365
- type: nauc_recall_at_20_max
value: 18.6663
- type: nauc_recall_at_20_std
value: 8.094899999999999
- type: nauc_recall_at_20_diff1
value: 29.124100000000002
- type: nauc_recall_at_100_max
value: 16.6248
- type: nauc_recall_at_100_std
value: 29.871199999999998
- type: nauc_recall_at_100_diff1
value: 22.16
- type: nauc_recall_at_1000_max
value: 39.4327
- type: nauc_recall_at_1000_std
value: 48.1674
- type: nauc_recall_at_1000_diff1
value: 35.6011
- type: nauc_precision_at_1_max
value: 25.1156
- type: nauc_precision_at_1_std
value: 5.8615
- type: nauc_precision_at_1_diff1
value: 49.8287
- type: nauc_precision_at_3_max
value: 27.020100000000003
- type: nauc_precision_at_3_std
value: 9.533800000000001
- type: nauc_precision_at_3_diff1
value: 37.6314
- type: nauc_precision_at_5_max
value: 26.1386
- type: nauc_precision_at_5_std
value: 8.0008
- type: nauc_precision_at_5_diff1
value: 29.5792
- type: nauc_precision_at_10_max
value: 21.7676
- type: nauc_precision_at_10_std
value: 8.2431
- type: nauc_precision_at_10_diff1
value: 22.1029
- type: nauc_precision_at_20_max
value: 19.3305
- type: nauc_precision_at_20_std
value: 8.7156
- type: nauc_precision_at_20_diff1
value: 16.4788
- type: nauc_precision_at_100_max
value: 11.6811
- type: nauc_precision_at_100_std
value: 20.169
- type: nauc_precision_at_100_diff1
value: -0.8449
- type: nauc_precision_at_1000_max
value: -5.1109
- type: nauc_precision_at_1000_std
value: 3.2081999999999997
- type: nauc_precision_at_1000_diff1
value: -23.8473
- type: nauc_mrr_at_1_max
value: 25.1156
- type: nauc_mrr_at_1_std
value: 5.8615
- type: nauc_mrr_at_1_diff1
value: 49.8287
- type: nauc_mrr_at_3_max
value: 26.009999999999998
- type: nauc_mrr_at_3_std
value: 7.795100000000001
- type: nauc_mrr_at_3_diff1
value: 46.0223
- type: nauc_mrr_at_5_max
value: 25.8328
- type: nauc_mrr_at_5_std
value: 7.4602
- type: nauc_mrr_at_5_diff1
value: 44.6238
- type: nauc_mrr_at_10_max
value: 25.5241
- type: nauc_mrr_at_10_std
value: 7.071199999999999
- type: nauc_mrr_at_10_diff1
value: 44.3654
- type: nauc_mrr_at_20_max
value: 25.287100000000002
- type: nauc_mrr_at_20_std
value: 7.194100000000001
- type: nauc_mrr_at_20_diff1
value: 44.2009
- type: nauc_mrr_at_100_max
value: 25.3365
- type: nauc_mrr_at_100_std
value: 7.513400000000001
- type: nauc_mrr_at_100_diff1
value: 44.2267
- type: nauc_mrr_at_1000_max
value: 25.361
- type: nauc_mrr_at_1000_std
value: 7.492500000000001
- type: nauc_mrr_at_1000_diff1
value: 44.2745
- type: main_score
value: 39.722
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: ndcg_at_1
value: 47.818
- type: ndcg_at_3
value: 41.184
- type: ndcg_at_5
value: 43.733
- type: ndcg_at_10
value: 48.439
- type: ndcg_at_20
value: 51.55800000000001
- type: ndcg_at_100
value: 55.562
- type: ndcg_at_1000
value: 57.885
- type: map_at_1
value: 20.846
- type: map_at_3
value: 30.997999999999998
- type: map_at_5
value: 34.524
- type: map_at_10
value: 37.427
- type: map_at_20
value: 38.761
- type: map_at_100
value: 39.753
- type: map_at_1000
value: 39.910000000000004
- type: recall_at_1
value: 20.846
- type: recall_at_3
value: 36.958999999999996
- type: recall_at_5
value: 45.079
- type: recall_at_10
value: 55.295
- type: recall_at_20
value: 63.815
- type: recall_at_100
value: 78.456
- type: recall_at_1000
value: 91.106
- type: precision_at_1
value: 47.818
- type: precision_at_3
value: 31.422
- type: precision_at_5
value: 23.909
- type: precision_at_10
value: 15.14
- type: precision_at_20
value: 8.977
- type: precision_at_100
value: 2.294
- type: precision_at_1000
value: 0.27399999999999997
- type: mrr_at_1
value: 47.8176
- type: mrr_at_3
value: 58.165
- type: mrr_at_5
value: 59.764399999999995
- type: mrr_at_10
value: 60.663999999999994
- type: mrr_at_20
value: 61.0341
- type: mrr_at_100
value: 61.15540000000001
- type: mrr_at_1000
value: 61.168
- type: nauc_ndcg_at_1_max
value: 42.0574
- type: nauc_ndcg_at_1_std
value: 15.0412
- type: nauc_ndcg_at_1_diff1
value: 38.8542
- type: nauc_ndcg_at_3_max
value: 45.5116
- type: nauc_ndcg_at_3_std
value: 17.8474
- type: nauc_ndcg_at_3_diff1
value: 30.4299
- type: nauc_ndcg_at_5_max
value: 47.5315
- type: nauc_ndcg_at_5_std
value: 21.577199999999998
- type: nauc_ndcg_at_5_diff1
value: 29.9137
- type: nauc_ndcg_at_10_max
value: 49.6024
- type: nauc_ndcg_at_10_std
value: 26.744400000000002
- type: nauc_ndcg_at_10_diff1
value: 28.2074
- type: nauc_ndcg_at_20_max
value: 48.8087
- type: nauc_ndcg_at_20_std
value: 27.815800000000003
- type: nauc_ndcg_at_20_diff1
value: 27.843200000000003
- type: nauc_ndcg_at_100_max
value: 49.2657
- type: nauc_ndcg_at_100_std
value: 28.958299999999998
- type: nauc_ndcg_at_100_diff1
value: 28.2285
- type: nauc_ndcg_at_1000_max
value: 49.8
- type: nauc_ndcg_at_1000_std
value: 28.3402
- type: nauc_ndcg_at_1000_diff1
value: 29.149399999999996
- type: nauc_map_at_1_max
value: 43.4717
- type: nauc_map_at_1_std
value: 7.5933
- type: nauc_map_at_1_diff1
value: 43.8334
- type: nauc_map_at_3_max
value: 45.1549
- type: nauc_map_at_3_std
value: 13.7322
- type: nauc_map_at_3_diff1
value: 32.893499999999996
- type: nauc_map_at_5_max
value: 46.0483
- type: nauc_map_at_5_std
value: 17.3988
- type: nauc_map_at_5_diff1
value: 30.7728
- type: nauc_map_at_10_max
value: 47.5898
- type: nauc_map_at_10_std
value: 21.0419
- type: nauc_map_at_10_diff1
value: 29.6458
- type: nauc_map_at_20_max
value: 47.362500000000004
- type: nauc_map_at_20_std
value: 21.7003
- type: nauc_map_at_20_diff1
value: 29.510199999999998
- type: nauc_map_at_100_max
value: 47.4886
- type: nauc_map_at_100_std
value: 22.0129
- type: nauc_map_at_100_diff1
value: 29.5581
- type: nauc_map_at_1000_max
value: 47.523700000000005
- type: nauc_map_at_1000_std
value: 22.0011
- type: nauc_map_at_1000_diff1
value: 29.5994
- type: nauc_recall_at_1_max
value: 43.4717
- type: nauc_recall_at_1_std
value: 7.5933
- type: nauc_recall_at_1_diff1
value: 43.8334
- type: nauc_recall_at_3_max
value: 44.0599
- type: nauc_recall_at_3_std
value: 15.920000000000002
- type: nauc_recall_at_3_diff1
value: 26.1615
- type: nauc_recall_at_5_max
value: 44.715500000000006
- type: nauc_recall_at_5_std
value: 22.4972
- type: nauc_recall_at_5_diff1
value: 22.8433
- type: nauc_recall_at_10_max
value: 46.716499999999996
- type: nauc_recall_at_10_std
value: 32.6123
- type: nauc_recall_at_10_diff1
value: 17.7149
- type: nauc_recall_at_20_max
value: 43.5173
- type: nauc_recall_at_20_std
value: 35.7574
- type: nauc_recall_at_20_diff1
value: 15.4727
- type: nauc_recall_at_100_max
value: 45.2425
- type: nauc_recall_at_100_std
value: 44.2578
- type: nauc_recall_at_100_diff1
value: 14.2627
- type: nauc_recall_at_1000_max
value: 57.903800000000004
- type: nauc_recall_at_1000_std
value: 56.775
- type: nauc_recall_at_1000_diff1
value: 18.5075
- type: nauc_precision_at_1_max
value: 42.0574
- type: nauc_precision_at_1_std
value: 15.0412
- type: nauc_precision_at_1_diff1
value: 38.8542
- type: nauc_precision_at_3_max
value: 35.791000000000004
- type: nauc_precision_at_3_std
value: 23.7089
- type: nauc_precision_at_3_diff1
value: 12.116100000000001
- type: nauc_precision_at_5_max
value: 32.874700000000004
- type: nauc_precision_at_5_std
value: 29.5353
- type: nauc_precision_at_5_diff1
value: 6.1242
- type: nauc_precision_at_10_max
value: 29.019000000000002
- type: nauc_precision_at_10_std
value: 34.8923
- type: nauc_precision_at_10_diff1
value: -1.0564
- type: nauc_precision_at_20_max
value: 19.5603
- type: nauc_precision_at_20_std
value: 32.8097
- type: nauc_precision_at_20_diff1
value: -5.8062000000000005
- type: nauc_precision_at_100_max
value: 7.152799999999999
- type: nauc_precision_at_100_std
value: 27.925299999999996
- type: nauc_precision_at_100_diff1
value: -12.053899999999999
- type: nauc_precision_at_1000_max
value: -6.3346
- type: nauc_precision_at_1000_std
value: 15.254599999999998
- type: nauc_precision_at_1000_diff1
value: -17.0105
- type: nauc_mrr_at_1_max
value: 42.0574
- type: nauc_mrr_at_1_std
value: 15.0412
- type: nauc_mrr_at_1_diff1
value: 38.8542
- type: nauc_mrr_at_3_max
value: 46.7432
- type: nauc_mrr_at_3_std
value: 21.6624
- type: nauc_mrr_at_3_diff1
value: 34.3397
- type: nauc_mrr_at_5_max
value: 47.1175
- type: nauc_mrr_at_5_std
value: 22.811
- type: nauc_mrr_at_5_diff1
value: 34.9539
- type: nauc_mrr_at_10_max
value: 47.172599999999996
- type: nauc_mrr_at_10_std
value: 22.9886
- type: nauc_mrr_at_10_diff1
value: 34.7462
- type: nauc_mrr_at_20_max
value: 46.9932
- type: nauc_mrr_at_20_std
value: 22.8034
- type: nauc_mrr_at_20_diff1
value: 34.8125
- type: nauc_mrr_at_100_max
value: 46.9459
- type: nauc_mrr_at_100_std
value: 22.7237
- type: nauc_mrr_at_100_diff1
value: 34.8654
- type: nauc_mrr_at_1000_max
value: 46.935
- type: nauc_mrr_at_1000_std
value: 22.7019
- type: nauc_mrr_at_1000_diff1
value: 34.867
- type: main_score
value: 48.439
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: ndcg_at_1
value: 64.5
- type: ndcg_at_3
value: 55.836
- type: ndcg_at_5
value: 54.459
- type: ndcg_at_10
value: 52.476
- type: ndcg_at_20
value: 51.388999999999996
- type: ndcg_at_100
value: 57.477000000000004
- type: ndcg_at_1000
value: 64.165
- type: map_at_1
value: 10.067
- type: map_at_3
value: 16.348
- type: map_at_5
value: 20.202
- type: map_at_10
value: 24.938
- type: map_at_20
value: 29.586000000000002
- type: map_at_100
value: 37.057
- type: map_at_1000
value: 39.224
- type: recall_at_1
value: 10.067
- type: recall_at_3
value: 17.62
- type: recall_at_5
value: 23.162
- type: recall_at_10
value: 30.769999999999996
- type: recall_at_20
value: 39.719
- type: recall_at_100
value: 64.159
- type: recall_at_1000
value: 86.166
- type: precision_at_1
value: 76.75
- type: precision_at_3
value: 59.833000000000006
- type: precision_at_5
value: 53.75
- type: precision_at_10
value: 43.375
- type: precision_at_20
value: 33.025
- type: precision_at_100
value: 14.011999999999999
- type: precision_at_1000
value: 2.654
- type: mrr_at_1
value: 76.75
- type: mrr_at_3
value: 81.54169999999999
- type: mrr_at_5
value: 82.42920000000001
- type: mrr_at_10
value: 82.8224
- type: mrr_at_20
value: 82.90859999999999
- type: mrr_at_100
value: 82.99419999999999
- type: mrr_at_1000
value: 82.99990000000001
- type: nauc_ndcg_at_1_max
value: 45.8251
- type: nauc_ndcg_at_1_std
value: 26.468799999999998
- type: nauc_ndcg_at_1_diff1
value: 44.3761
- type: nauc_ndcg_at_3_max
value: 38.7362
- type: nauc_ndcg_at_3_std
value: 24.956300000000002
- type: nauc_ndcg_at_3_diff1
value: 24.4135
- type: nauc_ndcg_at_5_max
value: 39.9097
- type: nauc_ndcg_at_5_std
value: 24.9363
- type: nauc_ndcg_at_5_diff1
value: 24.607200000000002
- type: nauc_ndcg_at_10_max
value: 38.8691
- type: nauc_ndcg_at_10_std
value: 22.1204
- type: nauc_ndcg_at_10_diff1
value: 24.0339
- type: nauc_ndcg_at_20_max
value: 38.9902
- type: nauc_ndcg_at_20_std
value: 19.3997
- type: nauc_ndcg_at_20_diff1
value: 26.8911
- type: nauc_ndcg_at_100_max
value: 45.138600000000004
- type: nauc_ndcg_at_100_std
value: 28.6063
- type: nauc_ndcg_at_100_diff1
value: 32.5846
- type: nauc_ndcg_at_1000_max
value: 48.2813
- type: nauc_ndcg_at_1000_std
value: 37.6229
- type: nauc_ndcg_at_1000_diff1
value: 31.184800000000003
- type: nauc_map_at_1_max
value: 1.8275
- type: nauc_map_at_1_std
value: -23.8658
- type: nauc_map_at_1_diff1
value: 31.9071
- type: nauc_map_at_3_max
value: 8.081900000000001
- type: nauc_map_at_3_std
value: -20.647
- type: nauc_map_at_3_diff1
value: 26.509100000000004
- type: nauc_map_at_5_max
value: 12.9364
- type: nauc_map_at_5_std
value: -16.4111
- type: nauc_map_at_5_diff1
value: 26.3042
- type: nauc_map_at_10_max
value: 16.9186
- type: nauc_map_at_10_std
value: -10.174999999999999
- type: nauc_map_at_10_diff1
value: 24.0309
- type: nauc_map_at_20_max
value: 23.0241
- type: nauc_map_at_20_std
value: -0.5674
- type: nauc_map_at_20_diff1
value: 23.6815
- type: nauc_map_at_100_max
value: 30.620599999999996
- type: nauc_map_at_100_std
value: 18.436700000000002
- type: nauc_map_at_100_diff1
value: 22.7765
- type: nauc_map_at_1000_max
value: 31.1444
- type: nauc_map_at_1000_std
value: 20.9294
- type: nauc_map_at_1000_diff1
value: 21.1134
- type: nauc_recall_at_1_max
value: 1.8275
- type: nauc_recall_at_1_std
value: -23.8658
- type: nauc_recall_at_1_diff1
value: 31.9071
- type: nauc_recall_at_3_max
value: 4.6672
- type: nauc_recall_at_3_std
value: -22.2615
- type: nauc_recall_at_3_diff1
value: 21.9773
- type: nauc_recall_at_5_max
value: 9.882399999999999
- type: nauc_recall_at_5_std
value: -18.6467
- type: nauc_recall_at_5_diff1
value: 21.9263
- type: nauc_recall_at_10_max
value: 13.0783
- type: nauc_recall_at_10_std
value: -13.8897
- type: nauc_recall_at_10_diff1
value: 20.0473
- type: nauc_recall_at_20_max
value: 21.4983
- type: nauc_recall_at_20_std
value: -3.9253000000000005
- type: nauc_recall_at_20_diff1
value: 21.8969
- type: nauc_recall_at_100_max
value: 36.0702
- type: nauc_recall_at_100_std
value: 25.007800000000003
- type: nauc_recall_at_100_diff1
value: 26.6674
- type: nauc_recall_at_1000_max
value: 46.604
- type: nauc_recall_at_1000_std
value: 52.5305
- type: nauc_recall_at_1000_diff1
value: 31.8907
- type: nauc_precision_at_1_max
value: 56.293000000000006
- type: nauc_precision_at_1_std
value: 29.729699999999998
- type: nauc_precision_at_1_diff1
value: 55.167500000000004
- type: nauc_precision_at_3_max
value: 37.712
- type: nauc_precision_at_3_std
value: 37.155
- type: nauc_precision_at_3_diff1
value: 8.8632
- type: nauc_precision_at_5_max
value: 35.133399999999995
- type: nauc_precision_at_5_std
value: 38.5708
- type: nauc_precision_at_5_diff1
value: 3.6889
- type: nauc_precision_at_10_max
value: 26.4835
- type: nauc_precision_at_10_std
value: 38.136900000000004
- type: nauc_precision_at_10_diff1
value: -5.988099999999999
- type: nauc_precision_at_20_max
value: 24.3223
- type: nauc_precision_at_20_std
value: 40.487
- type: nauc_precision_at_20_diff1
value: -6.2481
- type: nauc_precision_at_100_max
value: 14.5761
- type: nauc_precision_at_100_std
value: 38.6962
- type: nauc_precision_at_100_diff1
value: -10.8208
- type: nauc_precision_at_1000_max
value: -7.7383999999999995
- type: nauc_precision_at_1000_std
value: 1.3077
- type: nauc_precision_at_1000_diff1
value: -23.3553
- type: nauc_mrr_at_1_max
value: 56.293000000000006
- type: nauc_mrr_at_1_std
value: 29.729699999999998
- type: nauc_mrr_at_1_diff1
value: 55.167500000000004
- type: nauc_mrr_at_3_max
value: 59.1321
- type: nauc_mrr_at_3_std
value: 35.067
- type: nauc_mrr_at_3_diff1
value: 53.4505
- type: nauc_mrr_at_5_max
value: 59.39149999999999
- type: nauc_mrr_at_5_std
value: 33.6519
- type: nauc_mrr_at_5_diff1
value: 53.7743
- type: nauc_mrr_at_10_max
value: 59.6483
- type: nauc_mrr_at_10_std
value: 33.537
- type: nauc_mrr_at_10_diff1
value: 53.75149999999999
- type: nauc_mrr_at_20_max
value: 59.4701
- type: nauc_mrr_at_20_std
value: 33.3212
- type: nauc_mrr_at_20_diff1
value: 53.690000000000005
- type: nauc_mrr_at_100_max
value: 59.3928
- type: nauc_mrr_at_100_std
value: 33.329
- type: nauc_mrr_at_100_diff1
value: 53.7903
- type: nauc_mrr_at_1000_max
value: 59.391400000000004
- type: nauc_mrr_at_1000_std
value: 33.3331
- type: nauc_mrr_at_1000_diff1
value: 53.797399999999996
- type: main_score
value: 52.476
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 87.72999999999999
- type: f1
value: 83.03399999999999
- type: f1_weighted
value: 88.0873
- type: main_score
value: 87.72999999999999
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: ndcg_at_1
value: 93.849
- type: ndcg_at_3
value: 94.08800000000001
- type: ndcg_at_5
value: 94.459
- type: ndcg_at_10
value: 94.806
- type: ndcg_at_20
value: 95.06
- type: ndcg_at_100
value: 95.338
- type: ndcg_at_1000
value: 95.465
- type: map_at_1
value: 87.21900000000001
- type: map_at_3
value: 92.16
- type: map_at_5
value: 92.607
- type: map_at_10
value: 92.88300000000001
- type: map_at_20
value: 93.009
- type: map_at_100
value: 93.07600000000001
- type: map_at_1000
value: 93.085
- type: recall_at_1
value: 87.21900000000001
- type: recall_at_3
value: 95.082
- type: recall_at_5
value: 96.09299999999999
- type: recall_at_10
value: 97.024
- type: recall_at_20
value: 97.773
- type: recall_at_100
value: 98.901
- type: recall_at_1000
value: 99.629
- type: precision_at_1
value: 93.849
- type: precision_at_3
value: 35.154
- type: precision_at_5
value: 21.557000000000002
- type: precision_at_10
value: 11.031
- type: precision_at_20
value: 5.622
- type: precision_at_100
value: 1.155
- type: precision_at_1000
value: 0.11800000000000001
- type: mrr_at_1
value: 93.8494
- type: mrr_at_3
value: 96.43209999999999
- type: mrr_at_5
value: 96.4914
- type: mrr_at_10
value: 96.5095
- type: mrr_at_20
value: 96.5154
- type: mrr_at_100
value: 96.52340000000001
- type: mrr_at_1000
value: 96.5235
- type: nauc_ndcg_at_1_max
value: 22.8567
- type: nauc_ndcg_at_1_std
value: -28.435
- type: nauc_ndcg_at_1_diff1
value: 83.3685
- type: nauc_ndcg_at_3_max
value: 25.3199
- type: nauc_ndcg_at_3_std
value: -8.9749
- type: nauc_ndcg_at_3_diff1
value: 51.326499999999996
- type: nauc_ndcg_at_5_max
value: 24.3802
- type: nauc_ndcg_at_5_std
value: -8.0152
- type: nauc_ndcg_at_5_diff1
value: 50.26389999999999
- type: nauc_ndcg_at_10_max
value: 25.0331
- type: nauc_ndcg_at_10_std
value: -7.0646
- type: nauc_ndcg_at_10_diff1
value: 50.2641
- type: nauc_ndcg_at_20_max
value: 25.342399999999998
- type: nauc_ndcg_at_20_std
value: -7.000000000000001
- type: nauc_ndcg_at_20_diff1
value: 51.2941
- type: nauc_ndcg_at_100_max
value: 25.7293
- type: nauc_ndcg_at_100_std
value: -7.8615
- type: nauc_ndcg_at_100_diff1
value: 53.529700000000005
- type: nauc_ndcg_at_1000_max
value: 25.565500000000004
- type: nauc_ndcg_at_1000_std
value: -9.2169
- type: nauc_ndcg_at_1000_diff1
value: 54.8992
- type: nauc_map_at_1_max
value: 22.5516
- type: nauc_map_at_1_std
value: -11.0841
- type: nauc_map_at_1_diff1
value: 60.23929999999999
- type: nauc_map_at_3_max
value: 23.7695
- type: nauc_map_at_3_std
value: -8.1368
- type: nauc_map_at_3_diff1
value: 50.648
- type: nauc_map_at_5_max
value: 23.7918
- type: nauc_map_at_5_std
value: -8.023
- type: nauc_map_at_5_diff1
value: 50.8238
- type: nauc_map_at_10_max
value: 24.305
- type: nauc_map_at_10_std
value: -7.5793
- type: nauc_map_at_10_diff1
value: 51.1269
- type: nauc_map_at_20_max
value: 24.458099999999998
- type: nauc_map_at_20_std
value: -7.541399999999999
- type: nauc_map_at_20_diff1
value: 51.4841
- type: nauc_map_at_100_max
value: 24.562
- type: nauc_map_at_100_std
value: -7.6588
- type: nauc_map_at_100_diff1
value: 51.8489
- type: nauc_map_at_1000_max
value: 24.5627
- type: nauc_map_at_1000_std
value: -7.7154
- type: nauc_map_at_1000_diff1
value: 51.9049
- type: nauc_recall_at_1_max
value: 22.5516
- type: nauc_recall_at_1_std
value: -11.0841
- type: nauc_recall_at_1_diff1
value: 60.23929999999999
- type: nauc_recall_at_3_max
value: 25.117
- type: nauc_recall_at_3_std
value: 4.056
- type: nauc_recall_at_3_diff1
value: 28.1752
- type: nauc_recall_at_5_max
value: 24.776999999999997
- type: nauc_recall_at_5_std
value: 9.3384
- type: nauc_recall_at_5_diff1
value: 22.5866
- type: nauc_recall_at_10_max
value: 28.2575
- type: nauc_recall_at_10_std
value: 18.0012
- type: nauc_recall_at_10_diff1
value: 16.8302
- type: nauc_recall_at_20_max
value: 30.4282
- type: nauc_recall_at_20_std
value: 27.020100000000003
- type: nauc_recall_at_20_diff1
value: 12.195
- type: nauc_recall_at_100_max
value: 40.7633
- type: nauc_recall_at_100_std
value: 47.6965
- type: nauc_recall_at_100_diff1
value: 9.647
- type: nauc_recall_at_1000_max
value: 53.4871
- type: nauc_recall_at_1000_std
value: 58.3394
- type: nauc_recall_at_1000_diff1
value: 16.0846
- type: nauc_precision_at_1_max
value: 22.8567
- type: nauc_precision_at_1_std
value: -28.435
- type: nauc_precision_at_1_diff1
value: 83.3685
- type: nauc_precision_at_3_max
value: -7.2411
- type: nauc_precision_at_3_std
value: -0.819
- type: nauc_precision_at_3_diff1
value: -25.5367
- type: nauc_precision_at_5_max
value: -8.8802
- type: nauc_precision_at_5_std
value: 0.4376
- type: nauc_precision_at_5_diff1
value: -26.209100000000003
- type: nauc_precision_at_10_max
value: -7.9997
- type: nauc_precision_at_10_std
value: 1.9876999999999998
- type: nauc_precision_at_10_diff1
value: -24.851200000000002
- type: nauc_precision_at_20_max
value: -7.5102
- type: nauc_precision_at_20_std
value: 2.2863
- type: nauc_precision_at_20_diff1
value: -23.224
- type: nauc_precision_at_100_max
value: -8.2868
- type: nauc_precision_at_100_std
value: 0.9126000000000001
- type: nauc_precision_at_100_diff1
value: -21.0977
- type: nauc_precision_at_1000_max
value: -9.5708
- type: nauc_precision_at_1000_std
value: -2.0409
- type: nauc_precision_at_1000_diff1
value: -18.6371
- type: nauc_mrr_at_1_max
value: 22.8567
- type: nauc_mrr_at_1_std
value: -28.435
- type: nauc_mrr_at_1_diff1
value: 83.3685
- type: nauc_mrr_at_3_max
value: 25.0204
- type: nauc_mrr_at_3_std
value: -29.9704
- type: nauc_mrr_at_3_diff1
value: 79.9868
- type: nauc_mrr_at_5_max
value: 24.4871
- type: nauc_mrr_at_5_std
value: -29.5127
- type: nauc_mrr_at_5_diff1
value: 79.8468
- type: nauc_mrr_at_10_max
value: 24.543400000000002
- type: nauc_mrr_at_10_std
value: -29.243799999999997
- type: nauc_mrr_at_10_diff1
value: 79.8492
- type: nauc_mrr_at_20_max
value: 24.4687
- type: nauc_mrr_at_20_std
value: -29.2678
- type: nauc_mrr_at_20_diff1
value: 79.9428
- type: nauc_mrr_at_100_max
value: 24.43
- type: nauc_mrr_at_100_std
value: -29.3424
- type: nauc_mrr_at_100_diff1
value: 80.1115
- type: nauc_mrr_at_1000_max
value: 24.429699999999997
- type: nauc_mrr_at_1000_std
value: -29.3431
- type: nauc_mrr_at_1000_diff1
value: 80.1118
- type: main_score
value: 94.806
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: ndcg_at_1
value: 59.721999999999994
- type: ndcg_at_3
value: 55.584
- type: ndcg_at_5
value: 56.852000000000004
- type: ndcg_at_10
value: 60.118
- type: ndcg_at_20
value: 62.756
- type: ndcg_at_100
value: 66.434
- type: ndcg_at_1000
value: 67.796
- type: map_at_1
value: 31.213
- type: map_at_3
value: 45.164
- type: map_at_5
value: 48.66
- type: map_at_10
value: 51.637
- type: map_at_20
value: 52.886
- type: map_at_100
value: 53.785000000000004
- type: map_at_1000
value: 53.886
- type: recall_at_1
value: 31.213
- type: recall_at_3
value: 50.105
- type: recall_at_5
value: 57.714
- type: recall_at_10
value: 67.93900000000001
- type: recall_at_20
value: 75.901
- type: recall_at_100
value: 90.424
- type: recall_at_1000
value: 98.24300000000001
- type: precision_at_1
value: 59.721999999999994
- type: precision_at_3
value: 36.986000000000004
- type: precision_at_5
value: 26.883000000000003
- type: precision_at_10
value: 16.558999999999997
- type: precision_at_20
value: 9.468
- type: precision_at_100
value: 2.332
- type: precision_at_1000
value: 0.258
- type: mrr_at_1
value: 59.7222
- type: mrr_at_3
value: 65.1492
- type: mrr_at_5
value: 66.376
- type: mrr_at_10
value: 67.3183
- type: mrr_at_20
value: 67.6241
- type: mrr_at_100
value: 67.8682
- type: mrr_at_1000
value: 67.8792
- type: nauc_ndcg_at_1_max
value: 43.5471
- type: nauc_ndcg_at_1_std
value: -7.1188
- type: nauc_ndcg_at_1_diff1
value: 62.971900000000005
- type: nauc_ndcg_at_3_max
value: 34.357
- type: nauc_ndcg_at_3_std
value: -10.8661
- type: nauc_ndcg_at_3_diff1
value: 47.8167
- type: nauc_ndcg_at_5_max
value: 32.3401
- type: nauc_ndcg_at_5_std
value: -10.3064
- type: nauc_ndcg_at_5_diff1
value: 47.248200000000004
- type: nauc_ndcg_at_10_max
value: 33.4274
- type: nauc_ndcg_at_10_std
value: -6.7193000000000005
- type: nauc_ndcg_at_10_diff1
value: 48.5323
- type: nauc_ndcg_at_20_max
value: 35.2779
- type: nauc_ndcg_at_20_std
value: -4.914700000000001
- type: nauc_ndcg_at_20_diff1
value: 48.952200000000005
- type: nauc_ndcg_at_100_max
value: 35.7555
- type: nauc_ndcg_at_100_std
value: -4.0120000000000005
- type: nauc_ndcg_at_100_diff1
value: 49.684200000000004
- type: nauc_ndcg_at_1000_max
value: 36.836999999999996
- type: nauc_ndcg_at_1000_std
value: -5.0826
- type: nauc_ndcg_at_1000_diff1
value: 49.9576
- type: nauc_map_at_1_max
value: 12.3228
- type: nauc_map_at_1_std
value: -6.8317000000000005
- type: nauc_map_at_1_diff1
value: 52.012499999999996
- type: nauc_map_at_3_max
value: 22.758300000000002
- type: nauc_map_at_3_std
value: -10.415199999999999
- type: nauc_map_at_3_diff1
value: 48.8788
- type: nauc_map_at_5_max
value: 26.2402
- type: nauc_map_at_5_std
value: -9.831199999999999
- type: nauc_map_at_5_diff1
value: 47.2214
- type: nauc_map_at_10_max
value: 28.962
- type: nauc_map_at_10_std
value: -8.847900000000001
- type: nauc_map_at_10_diff1
value: 47.177
- type: nauc_map_at_20_max
value: 30.252699999999997
- type: nauc_map_at_20_std
value: -7.994
- type: nauc_map_at_20_diff1
value: 47.2329
- type: nauc_map_at_100_max
value: 30.621
- type: nauc_map_at_100_std
value: -7.5253
- type: nauc_map_at_100_diff1
value: 47.2401
- type: nauc_map_at_1000_max
value: 30.6957
- type: nauc_map_at_1000_std
value: -7.570200000000001
- type: nauc_map_at_1000_diff1
value: 47.2385
- type: nauc_recall_at_1_max
value: 12.3228
- type: nauc_recall_at_1_std
value: -6.8317000000000005
- type: nauc_recall_at_1_diff1
value: 52.012499999999996
- type: nauc_recall_at_3_max
value: 18.8278
- type: nauc_recall_at_3_std
value: -11.0181
- type: nauc_recall_at_3_diff1
value: 42.853
- type: nauc_recall_at_5_max
value: 20.494699999999998
- type: nauc_recall_at_5_std
value: -10.1645
- type: nauc_recall_at_5_diff1
value: 38.1874
- type: nauc_recall_at_10_max
value: 24.264
- type: nauc_recall_at_10_std
value: -1.4449999999999998
- type: nauc_recall_at_10_diff1
value: 35.9821
- type: nauc_recall_at_20_max
value: 27.699
- type: nauc_recall_at_20_std
value: 5.3731
- type: nauc_recall_at_20_diff1
value: 34.8722
- type: nauc_recall_at_100_max
value: 17.296
- type: nauc_recall_at_100_std
value: 22.0343
- type: nauc_recall_at_100_diff1
value: 31.7347
- type: nauc_recall_at_1000_max
value: 30.353
- type: nauc_recall_at_1000_std
value: 48.0299
- type: nauc_recall_at_1000_diff1
value: 24.395500000000002
- type: nauc_precision_at_1_max
value: 43.5471
- type: nauc_precision_at_1_std
value: -7.1188
- type: nauc_precision_at_1_diff1
value: 62.971900000000005
- type: nauc_precision_at_3_max
value: 39.456599999999995
- type: nauc_precision_at_3_std
value: -6.9102
- type: nauc_precision_at_3_diff1
value: 21.7063
- type: nauc_precision_at_5_max
value: 39.5105
- type: nauc_precision_at_5_std
value: -4.0794999999999995
- type: nauc_precision_at_5_diff1
value: 11.556700000000001
- type: nauc_precision_at_10_max
value: 38.1647
- type: nauc_precision_at_10_std
value: 1.0857999999999999
- type: nauc_precision_at_10_diff1
value: 3.9246000000000003
- type: nauc_precision_at_20_max
value: 38.1511
- type: nauc_precision_at_20_std
value: 5.1094
- type: nauc_precision_at_20_diff1
value: -0.468
- type: nauc_precision_at_100_max
value: 30.9215
- type: nauc_precision_at_100_std
value: 7.5769
- type: nauc_precision_at_100_diff1
value: -8.6537
- type: nauc_precision_at_1000_max
value: 28.324899999999996
- type: nauc_precision_at_1000_std
value: 4.0066999999999995
- type: nauc_precision_at_1000_diff1
value: -13.2833
- type: nauc_mrr_at_1_max
value: 43.5471
- type: nauc_mrr_at_1_std
value: -7.1188
- type: nauc_mrr_at_1_diff1
value: 62.971900000000005
- type: nauc_mrr_at_3_max
value: 44.7938
- type: nauc_mrr_at_3_std
value: -8.1318
- type: nauc_mrr_at_3_diff1
value: 60.6936
- type: nauc_mrr_at_5_max
value: 44.398199999999996
- type: nauc_mrr_at_5_std
value: -8.0999
- type: nauc_mrr_at_5_diff1
value: 60.2421
- type: nauc_mrr_at_10_max
value: 44.7636
- type: nauc_mrr_at_10_std
value: -6.6530000000000005
- type: nauc_mrr_at_10_diff1
value: 60.168
- type: nauc_mrr_at_20_max
value: 44.5856
- type: nauc_mrr_at_20_std
value: -6.786300000000001
- type: nauc_mrr_at_20_diff1
value: 60.224999999999994
- type: nauc_mrr_at_100_max
value: 44.554899999999996
- type: nauc_mrr_at_100_std
value: -6.819400000000001
- type: nauc_mrr_at_100_diff1
value: 60.379400000000004
- type: nauc_mrr_at_1000_max
value: 44.5589
- type: nauc_mrr_at_1000_std
value: -6.8267
- type: nauc_mrr_at_1000_diff1
value: 60.3766
- type: main_score
value: 60.118
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: ndcg_at_1
value: 87.576
- type: ndcg_at_3
value: 73.276
- type: ndcg_at_5
value: 75.717
- type: ndcg_at_10
value: 77.681
- type: ndcg_at_20
value: 78.865
- type: ndcg_at_100
value: 80.42
- type: ndcg_at_1000
value: 81.34
- type: map_at_1
value: 43.788
- type: map_at_3
value: 66.769
- type: map_at_5
value: 68.811
- type: map_at_10
value: 70.06099999999999
- type: map_at_20
value: 70.565
- type: map_at_100
value: 70.901
- type: map_at_1000
value: 70.955
- type: recall_at_1
value: 43.788
- type: recall_at_3
value: 70.27
- type: recall_at_5
value: 75.098
- type: recall_at_10
value: 79.99300000000001
- type: recall_at_20
value: 83.795
- type: recall_at_100
value: 90.621
- type: recall_at_1000
value: 96.577
- type: precision_at_1
value: 87.576
- type: precision_at_3
value: 46.847
- type: precision_at_5
value: 30.038999999999998
- type: precision_at_10
value: 15.998999999999999
- type: precision_at_20
value: 8.379
- type: precision_at_100
value: 1.812
- type: precision_at_1000
value: 0.193
- type: mrr_at_1
value: 87.576
- type: mrr_at_3
value: 90.7968
- type: mrr_at_5
value: 91.135
- type: mrr_at_10
value: 91.3228
- type: mrr_at_20
value: 91.3865
- type: mrr_at_100
value: 91.41709999999999
- type: mrr_at_1000
value: 91.4191
- type: nauc_ndcg_at_1_max
value: 52.9848
- type: nauc_ndcg_at_1_std
value: -7.4415
- type: nauc_ndcg_at_1_diff1
value: 74.138
- type: nauc_ndcg_at_3_max
value: 18.2608
- type: nauc_ndcg_at_3_std
value: 2.8652
- type: nauc_ndcg_at_3_diff1
value: 8.2423
- type: nauc_ndcg_at_5_max
value: 18.7809
- type: nauc_ndcg_at_5_std
value: 5.266
- type: nauc_ndcg_at_5_diff1
value: 8.2429
- type: nauc_ndcg_at_10_max
value: 20.0532
- type: nauc_ndcg_at_10_std
value: 7.7897
- type: nauc_ndcg_at_10_diff1
value: 8.953
- type: nauc_ndcg_at_20_max
value: 20.921
- type: nauc_ndcg_at_20_std
value: 9.1425
- type: nauc_ndcg_at_20_diff1
value: 9.8906
- type: nauc_ndcg_at_100_max
value: 22.0304
- type: nauc_ndcg_at_100_std
value: 10.200099999999999
- type: nauc_ndcg_at_100_diff1
value: 10.8734
- type: nauc_ndcg_at_1000_max
value: 22.4498
- type: nauc_ndcg_at_1000_std
value: 9.0443
- type: nauc_ndcg_at_1000_diff1
value: 11.950800000000001
- type: nauc_map_at_1_max
value: 52.9848
- type: nauc_map_at_1_std
value: -7.4415
- type: nauc_map_at_1_diff1
value: 74.138
- type: nauc_map_at_3_max
value: 14.394599999999999
- type: nauc_map_at_3_std
value: 3.0435
- type: nauc_map_at_3_diff1
value: 2.7293000000000003
- type: nauc_map_at_5_max
value: 15.001000000000001
- type: nauc_map_at_5_std
value: 4.8138
- type: nauc_map_at_5_diff1
value: 3.0212
- type: nauc_map_at_10_max
value: 15.770600000000002
- type: nauc_map_at_10_std
value: 6.1151
- type: nauc_map_at_10_diff1
value: 3.546
- type: nauc_map_at_20_max
value: 16.0909
- type: nauc_map_at_20_std
value: 6.5694
- type: nauc_map_at_20_diff1
value: 3.8928
- type: nauc_map_at_100_max
value: 16.2717
- type: nauc_map_at_100_std
value: 6.7403
- type: nauc_map_at_100_diff1
value: 4.0282
- type: nauc_map_at_1000_max
value: 16.284399999999998
- type: nauc_map_at_1000_std
value: 6.6972000000000005
- type: nauc_map_at_1000_diff1
value: 4.0631
- type: nauc_recall_at_1_max
value: 52.9848
- type: nauc_recall_at_1_std
value: -7.4415
- type: nauc_recall_at_1_diff1
value: 74.138
- type: nauc_recall_at_3_max
value: 11.933
- type: nauc_recall_at_3_std
value: 5.3739
- type: nauc_recall_at_3_diff1
value: -3.9494000000000002
- type: nauc_recall_at_5_max
value: 12.087100000000001
- type: nauc_recall_at_5_std
value: 9.9072
- type: nauc_recall_at_5_diff1
value: -5.1775
- type: nauc_recall_at_10_max
value: 13.96
- type: nauc_recall_at_10_std
value: 16.7674
- type: nauc_recall_at_10_diff1
value: -5.56
- type: nauc_recall_at_20_max
value: 15.7857
- type: nauc_recall_at_20_std
value: 22.8973
- type: nauc_recall_at_20_diff1
value: -4.7106
- type: nauc_recall_at_100_max
value: 20.7038
- type: nauc_recall_at_100_std
value: 38.0023
- type: nauc_recall_at_100_diff1
value: -5.9268
- type: nauc_recall_at_1000_max
value: 31.1577
- type: nauc_recall_at_1000_std
value: 51.5596
- type: nauc_recall_at_1000_diff1
value: -3.6110999999999995
- type: nauc_precision_at_1_max
value: 52.9848
- type: nauc_precision_at_1_std
value: -7.4415
- type: nauc_precision_at_1_diff1
value: 74.138
- type: nauc_precision_at_3_max
value: 11.933
- type: nauc_precision_at_3_std
value: 5.3739
- type: nauc_precision_at_3_diff1
value: -3.9494000000000002
- type: nauc_precision_at_5_max
value: 12.087100000000001
- type: nauc_precision_at_5_std
value: 9.9072
- type: nauc_precision_at_5_diff1
value: -5.1775
- type: nauc_precision_at_10_max
value: 13.96
- type: nauc_precision_at_10_std
value: 16.7674
- type: nauc_precision_at_10_diff1
value: -5.56
- type: nauc_precision_at_20_max
value: 15.7857
- type: nauc_precision_at_20_std
value: 22.8973
- type: nauc_precision_at_20_diff1
value: -4.7106
- type: nauc_precision_at_100_max
value: 20.7038
- type: nauc_precision_at_100_std
value: 38.0023
- type: nauc_precision_at_100_diff1
value: -5.9268
- type: nauc_precision_at_1000_max
value: 31.1577
- type: nauc_precision_at_1000_std
value: 51.5596
- type: nauc_precision_at_1000_diff1
value: -3.6110999999999995
- type: nauc_mrr_at_1_max
value: 52.9848
- type: nauc_mrr_at_1_std
value: -7.4415
- type: nauc_mrr_at_1_diff1
value: 74.138
- type: nauc_mrr_at_3_max
value: 56.448100000000004
- type: nauc_mrr_at_3_std
value: -5.8551
- type: nauc_mrr_at_3_diff1
value: 73.5421
- type: nauc_mrr_at_5_max
value: 56.3061
- type: nauc_mrr_at_5_std
value: -5.8522
- type: nauc_mrr_at_5_diff1
value: 73.78190000000001
- type: nauc_mrr_at_10_max
value: 56.2413
- type: nauc_mrr_at_10_std
value: -5.6768
- type: nauc_mrr_at_10_diff1
value: 73.8144
- type: nauc_mrr_at_20_max
value: 56.1659
- type: nauc_mrr_at_20_std
value: -5.7085
- type: nauc_mrr_at_20_diff1
value: 73.8173
- type: nauc_mrr_at_100_max
value: 56.1251
- type: nauc_mrr_at_100_std
value: -5.7101
- type: nauc_mrr_at_100_diff1
value: 73.8056
- type: nauc_mrr_at_1000_max
value: 56.118199999999995
- type: nauc_mrr_at_1000_std
value: -5.723000000000001
- type: nauc_mrr_at_1000_diff1
value: 73.8034
- type: main_score
value: 77.681
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.9528
- type: f1
value: 96.95270000000001
- type: f1_weighted
value: 96.95270000000001
- type: ap
value: 95.35130000000001
- type: ap_weighted
value: 95.35130000000001
- type: main_score
value: 96.9528
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: ndcg_at_1
value: 25.759
- type: ndcg_at_3
value: 37.159
- type: ndcg_at_5
value: 41.388999999999996
- type: ndcg_at_10
value: 45.31
- type: ndcg_at_20
value: 47.912
- type: ndcg_at_100
value: 50.719
- type: ndcg_at_1000
value: 51.636
- type: map_at_1
value: 25.056
- type: map_at_3
value: 34.059
- type: map_at_5
value: 36.424
- type: map_at_10
value: 38.099
- type: map_at_20
value: 38.834
- type: map_at_100
value: 39.245999999999995
- type: map_at_1000
value: 39.285
- type: recall_at_1
value: 25.056
- type: recall_at_3
value: 45.414
- type: recall_at_5
value: 55.595000000000006
- type: recall_at_10
value: 67.43900000000001
- type: recall_at_20
value: 77.544
- type: recall_at_100
value: 92.23599999999999
- type: recall_at_1000
value: 99.087
- type: precision_at_1
value: 25.759
- type: precision_at_3
value: 15.716
- type: precision_at_5
value: 11.579
- type: precision_at_10
value: 7.063999999999999
- type: precision_at_20
value: 4.074
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.105
- type: mrr_at_1
value: 25.759300000000003
- type: mrr_at_3
value: 34.7827
- type: mrr_at_5
value: 37.1122
- type: mrr_at_10
value: 38.702799999999996
- type: mrr_at_20
value: 39.4035
- type: mrr_at_100
value: 39.7874
- type: mrr_at_1000
value: 39.820499999999996
- type: nauc_ndcg_at_1_max
value: 7.6849
- type: nauc_ndcg_at_1_std
value: -20.9503
- type: nauc_ndcg_at_1_diff1
value: 39.0921
- type: nauc_ndcg_at_3_max
value: 8.9183
- type: nauc_ndcg_at_3_std
value: -24.5722
- type: nauc_ndcg_at_3_diff1
value: 35.3955
- type: nauc_ndcg_at_5_max
value: 9.3112
- type: nauc_ndcg_at_5_std
value: -24.8615
- type: nauc_ndcg_at_5_diff1
value: 34.9193
- type: nauc_ndcg_at_10_max
value: 9.9335
- type: nauc_ndcg_at_10_std
value: -24.2891
- type: nauc_ndcg_at_10_diff1
value: 35.2354
- type: nauc_ndcg_at_20_max
value: 10.1047
- type: nauc_ndcg_at_20_std
value: -22.538
- type: nauc_ndcg_at_20_diff1
value: 35.370200000000004
- type: nauc_ndcg_at_100_max
value: 9.9511
- type: nauc_ndcg_at_100_std
value: -21.182699999999997
- type: nauc_ndcg_at_100_diff1
value: 35.8301
- type: nauc_ndcg_at_1000_max
value: 9.7745
- type: nauc_ndcg_at_1000_std
value: -22.1117
- type: nauc_ndcg_at_1000_diff1
value: 35.7943
- type: nauc_map_at_1_max
value: 7.7292
- type: nauc_map_at_1_std
value: -20.9013
- type: nauc_map_at_1_diff1
value: 39.1258
- type: nauc_map_at_3_max
value: 8.6472
- type: nauc_map_at_3_std
value: -23.8389
- type: nauc_map_at_3_diff1
value: 36.3462
- type: nauc_map_at_5_max
value: 8.8231
- type: nauc_map_at_5_std
value: -24.048
- type: nauc_map_at_5_diff1
value: 36.0689
- type: nauc_map_at_10_max
value: 9.064
- type: nauc_map_at_10_std
value: -23.788400000000003
- type: nauc_map_at_10_diff1
value: 36.193
- type: nauc_map_at_20_max
value: 9.0944
- type: nauc_map_at_20_std
value: -23.3413
- type: nauc_map_at_20_diff1
value: 36.2575
- type: nauc_map_at_100_max
value: 9.0852
- type: nauc_map_at_100_std
value: -23.1205
- type: nauc_map_at_100_diff1
value: 36.314
- type: nauc_map_at_1000_max
value: 9.0815
- type: nauc_map_at_1000_std
value: -23.1446
- type: nauc_map_at_1000_diff1
value: 36.3137
- type: nauc_recall_at_1_max
value: 7.7292
- type: nauc_recall_at_1_std
value: -20.9013
- type: nauc_recall_at_1_diff1
value: 39.1258
- type: nauc_recall_at_3_max
value: 9.8077
- type: nauc_recall_at_3_std
value: -26.5565
- type: nauc_recall_at_3_diff1
value: 32.7773
- type: nauc_recall_at_5_max
value: 10.801
- type: nauc_recall_at_5_std
value: -27.3276
- type: nauc_recall_at_5_diff1
value: 31.4183
- type: nauc_recall_at_10_max
value: 13.324
- type: nauc_recall_at_10_std
value: -25.866699999999998
- type: nauc_recall_at_10_diff1
value: 31.8682
- type: nauc_recall_at_20_max
value: 15.2863
- type: nauc_recall_at_20_std
value: -16.487299999999998
- type: nauc_recall_at_20_diff1
value: 31.4332
- type: nauc_recall_at_100_max
value: 19.4577
- type: nauc_recall_at_100_std
value: 14.5535
- type: nauc_recall_at_100_diff1
value: 34.865899999999996
- type: nauc_recall_at_1000_max
value: 48.2992
- type: nauc_recall_at_1000_std
value: 61.405100000000004
- type: nauc_recall_at_1000_diff1
value: 23.533299999999997
- type: nauc_precision_at_1_max
value: 7.6849
- type: nauc_precision_at_1_std
value: -20.9503
- type: nauc_precision_at_1_diff1
value: 39.0921
- type: nauc_precision_at_3_max
value: 9.224400000000001
- type: nauc_precision_at_3_std
value: -26.7181
- type: nauc_precision_at_3_diff1
value: 32.0185
- type: nauc_precision_at_5_max
value: 10.2675
- type: nauc_precision_at_5_std
value: -26.803700000000003
- type: nauc_precision_at_5_diff1
value: 30.189300000000003
- type: nauc_precision_at_10_max
value: 12.128400000000001
- type: nauc_precision_at_10_std
value: -23.18
- type: nauc_precision_at_10_diff1
value: 28.5595
- type: nauc_precision_at_20_max
value: 13.1152
- type: nauc_precision_at_20_std
value: -12.6363
- type: nauc_precision_at_20_diff1
value: 25.7907
- type: nauc_precision_at_100_max
value: 11.881400000000001
- type: nauc_precision_at_100_std
value: 12.4388
- type: nauc_precision_at_100_diff1
value: 15.3289
- type: nauc_precision_at_1000_max
value: 5.5209
- type: nauc_precision_at_1000_std
value: 11.001900000000001
- type: nauc_precision_at_1000_diff1
value: -3.0637000000000003
- type: nauc_mrr_at_1_max
value: 7.6849
- type: nauc_mrr_at_1_std
value: -20.9503
- type: nauc_mrr_at_1_diff1
value: 39.0921
- type: nauc_mrr_at_3_max
value: 8.454699999999999
- type: nauc_mrr_at_3_std
value: -23.5899
- type: nauc_mrr_at_3_diff1
value: 36.077
- type: nauc_mrr_at_5_max
value: 8.7245
- type: nauc_mrr_at_5_std
value: -23.6374
- type: nauc_mrr_at_5_diff1
value: 35.844
- type: nauc_mrr_at_10_max
value: 8.9177
- type: nauc_mrr_at_10_std
value: -23.457800000000002
- type: nauc_mrr_at_10_diff1
value: 36.02
- type: nauc_mrr_at_20_max
value: 8.9659
- type: nauc_mrr_at_20_std
value: -23.0174
- type: nauc_mrr_at_20_diff1
value: 36.050900000000006
- type: nauc_mrr_at_100_max
value: 8.9461
- type: nauc_mrr_at_100_std
value: -22.8363
- type: nauc_mrr_at_100_diff1
value: 36.1078
- type: nauc_mrr_at_1000_max
value: 8.940299999999999
- type: nauc_mrr_at_1000_std
value: -22.8606
- type: nauc_mrr_at_1000_diff1
value: 36.106700000000004
- type: main_score
value: 45.31
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.1701
- type: f1
value: 99.07639999999999
- type: f1_weighted
value: 99.1712
- type: main_score
value: 99.1701
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.48429999999999
- type: f1
value: 72.55080000000001
- type: f1_weighted
value: 91.72449999999999
- type: main_score
value: 91.48429999999999
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 85.306
- type: f1
value: 83.1076
- type: f1_weighted
value: 84.4685
- type: main_score
value: 85.306
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 91.17349999999999
- type: f1
value: 90.15950000000001
- type: f1_weighted
value: 90.802
- type: main_score
value: 91.17349999999999
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.7494
- type: v_measure_std
value: 1.2241
- type: main_score
value: 46.7494
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.9016
- type: v_measure_std
value: 1.3978000000000002
- type: main_score
value: 44.9016
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: map
value: 32.628099999999996
- type: mrr
value: 33.8493
- type: nAUC_map_max
value: -21.0204
- type: nAUC_map_std
value: -5.1638
- type: nAUC_map_diff1
value: 10.4531
- type: nAUC_mrr_max
value: -15.85
- type: nAUC_mrr_std
value: -2.7011
- type: nAUC_mrr_diff1
value: 9.8352
- type: main_score
value: 32.628099999999996
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: ndcg_at_1
value: 54.954
- type: ndcg_at_3
value: 49.876
- type: ndcg_at_5
value: 47.038999999999994
- type: ndcg_at_10
value: 43.313
- type: ndcg_at_20
value: 41.086
- type: ndcg_at_100
value: 40.849999999999994
- type: ndcg_at_1000
value: 49.852000000000004
- type: map_at_1
value: 7.412000000000001
- type: map_at_3
value: 12.438
- type: map_at_5
value: 14.448
- type: map_at_10
value: 17.080000000000002
- type: map_at_20
value: 19.273
- type: map_at_100
value: 22.494
- type: map_at_1000
value: 24.490000000000002
- type: recall_at_1
value: 7.412000000000001
- type: recall_at_3
value: 13.682
- type: recall_at_5
value: 16.595
- type: recall_at_10
value: 21.726
- type: recall_at_20
value: 26.895999999999997
- type: recall_at_100
value: 42.345
- type: recall_at_1000
value: 74.752
- type: precision_at_1
value: 56.65599999999999
- type: precision_at_3
value: 46.646
- type: precision_at_5
value: 40.619
- type: precision_at_10
value: 31.796000000000003
- type: precision_at_20
value: 24.195
- type: precision_at_100
value: 10.65
- type: precision_at_1000
value: 2.4570000000000003
- type: mrr_at_1
value: 56.6563
- type: mrr_at_3
value: 63.2095
- type: mrr_at_5
value: 63.998999999999995
- type: mrr_at_10
value: 64.6678
- type: mrr_at_20
value: 64.96079999999999
- type: mrr_at_100
value: 65.13550000000001
- type: mrr_at_1000
value: 65.1606
- type: nauc_ndcg_at_1_max
value: 36.932900000000004
- type: nauc_ndcg_at_1_std
value: 18.8188
- type: nauc_ndcg_at_1_diff1
value: 19.8327
- type: nauc_ndcg_at_3_max
value: 37.044
- type: nauc_ndcg_at_3_std
value: 17.2918
- type: nauc_ndcg_at_3_diff1
value: 12.0583
- type: nauc_ndcg_at_5_max
value: 36.1203
- type: nauc_ndcg_at_5_std
value: 17.3222
- type: nauc_ndcg_at_5_diff1
value: 10.6337
- type: nauc_ndcg_at_10_max
value: 35.2176
- type: nauc_ndcg_at_10_std
value: 19.4608
- type: nauc_ndcg_at_10_diff1
value: 9.1899
- type: nauc_ndcg_at_20_max
value: 34.3354
- type: nauc_ndcg_at_20_std
value: 20.2528
- type: nauc_ndcg_at_20_diff1
value: 8.4017
- type: nauc_ndcg_at_100_max
value: 36.2388
- type: nauc_ndcg_at_100_std
value: 21.578500000000002
- type: nauc_ndcg_at_100_diff1
value: 10.639899999999999
- type: nauc_ndcg_at_1000_max
value: 42.6963
- type: nauc_ndcg_at_1000_std
value: 26.8738
- type: nauc_ndcg_at_1000_diff1
value: 10.3058
- type: nauc_map_at_1_max
value: 3.4625999999999997
- type: nauc_map_at_1_std
value: -18.5752
- type: nauc_map_at_1_diff1
value: 42.8963
- type: nauc_map_at_3_max
value: 10.6291
- type: nauc_map_at_3_std
value: -13.977500000000001
- type: nauc_map_at_3_diff1
value: 31.7847
- type: nauc_map_at_5_max
value: 13.558200000000001
- type: nauc_map_at_5_std
value: -13.120499999999998
- type: nauc_map_at_5_diff1
value: 29.016199999999998
- type: nauc_map_at_10_max
value: 18.4615
- type: nauc_map_at_10_std
value: -8.2696
- type: nauc_map_at_10_diff1
value: 24.9417
- type: nauc_map_at_20_max
value: 22.4656
- type: nauc_map_at_20_std
value: -3.5717
- type: nauc_map_at_20_diff1
value: 21.6654
- type: nauc_map_at_100_max
value: 26.3009
- type: nauc_map_at_100_std
value: 4.6873
- type: nauc_map_at_100_diff1
value: 18.1131
- type: nauc_map_at_1000_max
value: 26.965
- type: nauc_map_at_1000_std
value: 8.6958
- type: nauc_map_at_1000_diff1
value: 15.898499999999999
- type: nauc_recall_at_1_max
value: 3.4625999999999997
- type: nauc_recall_at_1_std
value: -18.5752
- type: nauc_recall_at_1_diff1
value: 42.8963
- type: nauc_recall_at_3_max
value: 8.9686
- type: nauc_recall_at_3_std
value: -13.836599999999999
- type: nauc_recall_at_3_diff1
value: 27.528799999999997
- type: nauc_recall_at_5_max
value: 11.818900000000001
- type: nauc_recall_at_5_std
value: -13.414499999999999
- type: nauc_recall_at_5_diff1
value: 24.535899999999998
- type: nauc_recall_at_10_max
value: 15.515300000000002
- type: nauc_recall_at_10_std
value: -7.9226
- type: nauc_recall_at_10_diff1
value: 19.0975
- type: nauc_recall_at_20_max
value: 18.786
- type: nauc_recall_at_20_std
value: -3.7244
- type: nauc_recall_at_20_diff1
value: 13.8233
- type: nauc_recall_at_100_max
value: 21.3034
- type: nauc_recall_at_100_std
value: 9.569600000000001
- type: nauc_recall_at_100_diff1
value: 4.5838
- type: nauc_recall_at_1000_max
value: 18.991
- type: nauc_recall_at_1000_std
value: 14.690100000000001
- type: nauc_recall_at_1000_diff1
value: -6.4048
- type: nauc_precision_at_1_max
value: 38.888099999999994
- type: nauc_precision_at_1_std
value: 19.6065
- type: nauc_precision_at_1_diff1
value: 18.8443
- type: nauc_precision_at_3_max
value: 38.5528
- type: nauc_precision_at_3_std
value: 22.456
- type: nauc_precision_at_3_diff1
value: -0.0451
- type: nauc_precision_at_5_max
value: 36.9122
- type: nauc_precision_at_5_std
value: 24.6269
- type: nauc_precision_at_5_diff1
value: -5.5478
- type: nauc_precision_at_10_max
value: 35.3133
- type: nauc_precision_at_10_std
value: 32.6397
- type: nauc_precision_at_10_diff1
value: -12.6832
- type: nauc_precision_at_20_max
value: 31.1714
- type: nauc_precision_at_20_std
value: 37.2027
- type: nauc_precision_at_20_diff1
value: -15.146
- type: nauc_precision_at_100_max
value: 16.4786
- type: nauc_precision_at_100_std
value: 41.3436
- type: nauc_precision_at_100_diff1
value: -19.1827
- type: nauc_precision_at_1000_max
value: 5.0432999999999995
- type: nauc_precision_at_1000_std
value: 34.5184
- type: nauc_precision_at_1000_diff1
value: -18.926399999999997
- type: nauc_mrr_at_1_max
value: 38.888099999999994
- type: nauc_mrr_at_1_std
value: 19.6065
- type: nauc_mrr_at_1_diff1
value: 18.8443
- type: nauc_mrr_at_3_max
value: 43.250699999999995
- type: nauc_mrr_at_3_std
value: 23.7515
- type: nauc_mrr_at_3_diff1
value: 18.895899999999997
- type: nauc_mrr_at_5_max
value: 43.845600000000005
- type: nauc_mrr_at_5_std
value: 23.7064
- type: nauc_mrr_at_5_diff1
value: 19.9659
- type: nauc_mrr_at_10_max
value: 44.075900000000004
- type: nauc_mrr_at_10_std
value: 24.7276
- type: nauc_mrr_at_10_diff1
value: 19.4665
- type: nauc_mrr_at_20_max
value: 44.169599999999996
- type: nauc_mrr_at_20_std
value: 24.6675
- type: nauc_mrr_at_20_diff1
value: 19.1246
- type: nauc_mrr_at_100_max
value: 44.1064
- type: nauc_mrr_at_100_std
value: 24.5961
- type: nauc_mrr_at_100_diff1
value: 19.1742
- type: nauc_mrr_at_1000_max
value: 44.083099999999995
- type: nauc_mrr_at_1000_std
value: 24.5796
- type: nauc_mrr_at_1000_diff1
value: 19.1781
- type: main_score
value: 43.313
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: ndcg_at_1
value: 52.751999999999995
- type: ndcg_at_3
value: 65.18599999999999
- type: ndcg_at_5
value: 68.931
- type: ndcg_at_10
value: 71.487
- type: ndcg_at_20
value: 72.694
- type: ndcg_at_100
value: 73.603
- type: ndcg_at_1000
value: 73.772
- type: map_at_1
value: 47.446
- type: map_at_3
value: 60.94500000000001
- type: map_at_5
value: 63.33
- type: map_at_10
value: 64.581
- type: map_at_20
value: 64.97800000000001
- type: map_at_100
value: 65.142
- type: map_at_1000
value: 65.15100000000001
- type: recall_at_1
value: 47.446
- type: recall_at_3
value: 74.13300000000001
- type: recall_at_5
value: 82.575
- type: recall_at_10
value: 89.827
- type: recall_at_20
value: 94.238
- type: recall_at_100
value: 98.646
- type: recall_at_1000
value: 99.846
- type: precision_at_1
value: 52.751999999999995
- type: precision_at_3
value: 28.910999999999998
- type: precision_at_5
value: 19.606
- type: precision_at_10
value: 10.8
- type: precision_at_20
value: 5.694
- type: precision_at_100
value: 1.199
- type: precision_at_1000
value: 0.122
- type: mrr_at_1
value: 52.751999999999995
- type: mrr_at_3
value: 64.10289999999999
- type: mrr_at_5
value: 65.8613
- type: mrr_at_10
value: 66.686
- type: mrr_at_20
value: 66.9409
- type: mrr_at_100
value: 67.0379
- type: mrr_at_1000
value: 67.0432
- type: nauc_ndcg_at_1_max
value: 30.3808
- type: nauc_ndcg_at_1_std
value: -5.9129000000000005
- type: nauc_ndcg_at_1_diff1
value: 47.0165
- type: nauc_ndcg_at_3_max
value: 35.7466
- type: nauc_ndcg_at_3_std
value: -8.3171
- type: nauc_ndcg_at_3_diff1
value: 41.8012
- type: nauc_ndcg_at_5_max
value: 36.5622
- type: nauc_ndcg_at_5_std
value: -7.2429
- type: nauc_ndcg_at_5_diff1
value: 41.8656
- type: nauc_ndcg_at_10_max
value: 36.8005
- type: nauc_ndcg_at_10_std
value: -6.1641
- type: nauc_ndcg_at_10_diff1
value: 42.9286
- type: nauc_ndcg_at_20_max
value: 36.518
- type: nauc_ndcg_at_20_std
value: -5.389200000000001
- type: nauc_ndcg_at_20_diff1
value: 43.249500000000005
- type: nauc_ndcg_at_100_max
value: 35.918
- type: nauc_ndcg_at_100_std
value: -5.2897
- type: nauc_ndcg_at_100_diff1
value: 43.4364
- type: nauc_ndcg_at_1000_max
value: 35.5555
- type: nauc_ndcg_at_1000_std
value: -5.657100000000001
- type: nauc_ndcg_at_1000_diff1
value: 43.2952
- type: nauc_map_at_1_max
value: 27.845399999999998
- type: nauc_map_at_1_std
value: -8.4687
- type: nauc_map_at_1_diff1
value: 46.1481
- type: nauc_map_at_3_max
value: 33.8624
- type: nauc_map_at_3_std
value: -8.7464
- type: nauc_map_at_3_diff1
value: 42.821
- type: nauc_map_at_5_max
value: 34.3698
- type: nauc_map_at_5_std
value: -7.893600000000001
- type: nauc_map_at_5_diff1
value: 42.869600000000005
- type: nauc_map_at_10_max
value: 34.469899999999996
- type: nauc_map_at_10_std
value: -7.372199999999999
- type: nauc_map_at_10_diff1
value: 43.2528
- type: nauc_map_at_20_max
value: 34.4121
- type: nauc_map_at_20_std
value: -7.1491
- type: nauc_map_at_20_diff1
value: 43.3789
- type: nauc_map_at_100_max
value: 34.3433
- type: nauc_map_at_100_std
value: -7.0851999999999995
- type: nauc_map_at_100_diff1
value: 43.3953
- type: nauc_map_at_1000_max
value: 34.3293
- type: nauc_map_at_1000_std
value: -7.096900000000001
- type: nauc_map_at_1000_diff1
value: 43.3893
- type: nauc_recall_at_1_max
value: 27.845399999999998
- type: nauc_recall_at_1_std
value: -8.4687
- type: nauc_recall_at_1_diff1
value: 46.1481
- type: nauc_recall_at_3_max
value: 39.4993
- type: nauc_recall_at_3_std
value: -10.2782
- type: nauc_recall_at_3_diff1
value: 36.239399999999996
- type: nauc_recall_at_5_max
value: 43.7836
- type: nauc_recall_at_5_std
value: -8.4871
- type: nauc_recall_at_5_diff1
value: 34.7778
- type: nauc_recall_at_10_max
value: 50.5553
- type: nauc_recall_at_10_std
value: -3.7254
- type: nauc_recall_at_10_diff1
value: 38.756800000000005
- type: nauc_recall_at_20_max
value: 56.6151
- type: nauc_recall_at_20_std
value: 7.4285000000000005
- type: nauc_recall_at_20_diff1
value: 39.919
- type: nauc_recall_at_100_max
value: 79.1225
- type: nauc_recall_at_100_std
value: 45.264700000000005
- type: nauc_recall_at_100_diff1
value: 52.2652
- type: nauc_recall_at_1000_max
value: 75.74210000000001
- type: nauc_recall_at_1000_std
value: 71.3295
- type: nauc_recall_at_1000_diff1
value: -2.503
- type: nauc_precision_at_1_max
value: 30.3808
- type: nauc_precision_at_1_std
value: -5.9129000000000005
- type: nauc_precision_at_1_diff1
value: 47.0165
- type: nauc_precision_at_3_max
value: 30.1298
- type: nauc_precision_at_3_std
value: 0.9233999999999999
- type: nauc_precision_at_3_diff1
value: 18.6972
- type: nauc_precision_at_5_max
value: 24.6955
- type: nauc_precision_at_5_std
value: 6.4975000000000005
- type: nauc_precision_at_5_diff1
value: 9.0141
- type: nauc_precision_at_10_max
value: 17.7006
- type: nauc_precision_at_10_std
value: 12.1744
- type: nauc_precision_at_10_diff1
value: 1.5388000000000002
- type: nauc_precision_at_20_max
value: 11.473600000000001
- type: nauc_precision_at_20_std
value: 16.2131
- type: nauc_precision_at_20_diff1
value: -4.3109
- type: nauc_precision_at_100_max
value: 3.5707999999999998
- type: nauc_precision_at_100_std
value: 18.5558
- type: nauc_precision_at_100_diff1
value: -10.8589
- type: nauc_precision_at_1000_max
value: -0.23800000000000002
- type: nauc_precision_at_1000_std
value: 16.941300000000002
- type: nauc_precision_at_1000_diff1
value: -13.7766
- type: nauc_mrr_at_1_max
value: 30.3808
- type: nauc_mrr_at_1_std
value: -5.9129000000000005
- type: nauc_mrr_at_1_diff1
value: 47.0165
- type: nauc_mrr_at_3_max
value: 35.058099999999996
- type: nauc_mrr_at_3_std
value: -5.724
- type: nauc_mrr_at_3_diff1
value: 43.3839
- type: nauc_mrr_at_5_max
value: 35.1297
- type: nauc_mrr_at_5_std
value: -5.3953
- type: nauc_mrr_at_5_diff1
value: 43.6497
- type: nauc_mrr_at_10_max
value: 35.001599999999996
- type: nauc_mrr_at_10_std
value: -5.2087
- type: nauc_mrr_at_10_diff1
value: 44.1327
- type: nauc_mrr_at_20_max
value: 34.891299999999994
- type: nauc_mrr_at_20_std
value: -5.0933
- type: nauc_mrr_at_20_diff1
value: 44.163799999999995
- type: nauc_mrr_at_100_max
value: 34.8054
- type: nauc_mrr_at_100_std
value: -5.136
- type: nauc_mrr_at_100_diff1
value: 44.1738
- type: nauc_mrr_at_1000_max
value: 34.795
- type: nauc_mrr_at_1000_std
value: -5.1476
- type: nauc_mrr_at_1000_diff1
value: 44.169599999999996
- type: main_score
value: 71.487
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: ndcg_at_1
value: 83.3
- type: ndcg_at_3
value: 87.50099999999999
- type: ndcg_at_5
value: 88.998
- type: ndcg_at_10
value: 90.038
- type: ndcg_at_20
value: 90.619
- type: ndcg_at_100
value: 91.049
- type: ndcg_at_1000
value: 91.105
- type: map_at_1
value: 72.424
- type: map_at_3
value: 83.76599999999999
- type: map_at_5
value: 85.616
- type: map_at_10
value: 86.626
- type: map_at_20
value: 87.034
- type: map_at_100
value: 87.233
- type: map_at_1000
value: 87.24499999999999
- type: recall_at_1
value: 72.424
- type: recall_at_3
value: 89.155
- type: recall_at_5
value: 93.425
- type: recall_at_10
value: 96.514
- type: recall_at_20
value: 98.334
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.997
- type: precision_at_1
value: 83.3
- type: precision_at_3
value: 38.393
- type: precision_at_5
value: 25.228
- type: precision_at_10
value: 13.647
- type: precision_at_20
value: 7.225
- type: precision_at_100
value: 1.543
- type: precision_at_1000
value: 0.157
- type: mrr_at_1
value: 83.3
- type: mrr_at_3
value: 88.3
- type: mrr_at_5
value: 88.90350000000001
- type: mrr_at_10
value: 89.13839999999999
- type: mrr_at_20
value: 89.1953
- type: mrr_at_100
value: 89.21170000000001
- type: mrr_at_1000
value: 89.212
- type: nauc_ndcg_at_1_max
value: 36.066500000000005
- type: nauc_ndcg_at_1_std
value: -42.5286
- type: nauc_ndcg_at_1_diff1
value: 79.3747
- type: nauc_ndcg_at_3_max
value: 33.2171
- type: nauc_ndcg_at_3_std
value: -50.6815
- type: nauc_ndcg_at_3_diff1
value: 77.3029
- type: nauc_ndcg_at_5_max
value: 33.578
- type: nauc_ndcg_at_5_std
value: -51.302400000000006
- type: nauc_ndcg_at_5_diff1
value: 77.73219999999999
- type: nauc_ndcg_at_10_max
value: 34.1893
- type: nauc_ndcg_at_10_std
value: -51.231
- type: nauc_ndcg_at_10_diff1
value: 77.91510000000001
- type: nauc_ndcg_at_20_max
value: 34.7652
- type: nauc_ndcg_at_20_std
value: -49.2675
- type: nauc_ndcg_at_20_diff1
value: 77.8851
- type: nauc_ndcg_at_100_max
value: 35.1212
- type: nauc_ndcg_at_100_std
value: -47.1897
- type: nauc_ndcg_at_100_diff1
value: 77.8819
- type: nauc_ndcg_at_1000_max
value: 35.2096
- type: nauc_ndcg_at_1000_std
value: -47.1269
- type: nauc_ndcg_at_1000_diff1
value: 77.874
- type: nauc_map_at_1_max
value: 25.3507
- type: nauc_map_at_1_std
value: -42.6645
- type: nauc_map_at_1_diff1
value: 82.1013
- type: nauc_map_at_3_max
value: 30.685000000000002
- type: nauc_map_at_3_std
value: -52.8243
- type: nauc_map_at_3_diff1
value: 78.8106
- type: nauc_map_at_5_max
value: 32.2752
- type: nauc_map_at_5_std
value: -52.546800000000005
- type: nauc_map_at_5_diff1
value: 78.4853
- type: nauc_map_at_10_max
value: 33.239000000000004
- type: nauc_map_at_10_std
value: -51.5082
- type: nauc_map_at_10_diff1
value: 78.1853
- type: nauc_map_at_20_max
value: 33.644200000000005
- type: nauc_map_at_20_std
value: -50.1154
- type: nauc_map_at_20_diff1
value: 78.0265
- type: nauc_map_at_100_max
value: 33.7945
- type: nauc_map_at_100_std
value: -49.1504
- type: nauc_map_at_100_diff1
value: 77.9653
- type: nauc_map_at_1000_max
value: 33.813500000000005
- type: nauc_map_at_1000_std
value: -49.119800000000005
- type: nauc_map_at_1000_diff1
value: 77.9596
- type: nauc_recall_at_1_max
value: 25.3507
- type: nauc_recall_at_1_std
value: -42.6645
- type: nauc_recall_at_1_diff1
value: 82.1013
- type: nauc_recall_at_3_max
value: 26.4941
- type: nauc_recall_at_3_std
value: -62.044399999999996
- type: nauc_recall_at_3_diff1
value: 75.3519
- type: nauc_recall_at_5_max
value: 26.5483
- type: nauc_recall_at_5_std
value: -68.6455
- type: nauc_recall_at_5_diff1
value: 74.2904
- type: nauc_recall_at_10_max
value: 28.0472
- type: nauc_recall_at_10_std
value: -80.47109999999999
- type: nauc_recall_at_10_diff1
value: 74.5668
- type: nauc_recall_at_20_max
value: 29.296699999999998
- type: nauc_recall_at_20_std
value: -77.2526
- type: nauc_recall_at_20_diff1
value: 73.5218
- type: nauc_recall_at_100_max
value: 29.6001
- type: nauc_recall_at_100_std
value: -36.9144
- type: nauc_recall_at_100_diff1
value: 71.9031
- type: nauc_recall_at_1000_max
value: 65.9499
- type: nauc_recall_at_1000_std
value: 65.2771
- type: nauc_recall_at_1000_diff1
value: 68.762
- type: nauc_precision_at_1_max
value: 36.066500000000005
- type: nauc_precision_at_1_std
value: -42.5286
- type: nauc_precision_at_1_diff1
value: 79.3747
- type: nauc_precision_at_3_max
value: 5.2775
- type: nauc_precision_at_3_std
value: 5.7708
- type: nauc_precision_at_3_diff1
value: -24.618100000000002
- type: nauc_precision_at_5_max
value: 0.9323
- type: nauc_precision_at_5_std
value: 17.5438
- type: nauc_precision_at_5_diff1
value: -36.0304
- type: nauc_precision_at_10_max
value: -2.321
- type: nauc_precision_at_10_std
value: 26.4542
- type: nauc_precision_at_10_diff1
value: -42.2061
- type: nauc_precision_at_20_max
value: -4.122
- type: nauc_precision_at_20_std
value: 33.5662
- type: nauc_precision_at_20_diff1
value: -44.6324
- type: nauc_precision_at_100_max
value: -5.6459
- type: nauc_precision_at_100_std
value: 40.1704
- type: nauc_precision_at_100_diff1
value: -45.8818
- type: nauc_precision_at_1000_max
value: -5.5985
- type: nauc_precision_at_1000_std
value: 40.402100000000004
- type: nauc_precision_at_1000_diff1
value: -46.054
- type: nauc_mrr_at_1_max
value: 36.066500000000005
- type: nauc_mrr_at_1_std
value: -42.5286
- type: nauc_mrr_at_1_diff1
value: 79.3747
- type: nauc_mrr_at_3_max
value: 36.3166
- type: nauc_mrr_at_3_std
value: -45.0789
- type: nauc_mrr_at_3_diff1
value: 78.3276
- type: nauc_mrr_at_5_max
value: 36.365700000000004
- type: nauc_mrr_at_5_std
value: -45.1793
- type: nauc_mrr_at_5_diff1
value: 78.5424
- type: nauc_mrr_at_10_max
value: 36.3128
- type: nauc_mrr_at_10_std
value: -45.0655
- type: nauc_mrr_at_10_diff1
value: 78.5756
- type: nauc_mrr_at_20_max
value: 36.327
- type: nauc_mrr_at_20_std
value: -44.853500000000004
- type: nauc_mrr_at_20_diff1
value: 78.57220000000001
- type: nauc_mrr_at_100_max
value: 36.3322
- type: nauc_mrr_at_100_std
value: -44.8025
- type: nauc_mrr_at_100_diff1
value: 78.57730000000001
- type: nauc_mrr_at_1000_max
value: 36.3314
- type: nauc_mrr_at_1000_std
value: -44.804100000000005
- type: nauc_mrr_at_1000_diff1
value: 78.5778
- type: main_score
value: 90.038
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.0223
- type: v_measure_std
value: 2.5644
- type: main_score
value: 73.0223
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 74.7154
- type: v_measure_std
value: 8.2851
- type: main_score
value: 74.7154
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: ndcg_at_1
value: 29.099999999999998
- type: ndcg_at_3
value: 22.891000000000002
- type: ndcg_at_5
value: 20.055
- type: ndcg_at_10
value: 24.637999999999998
- type: ndcg_at_20
value: 28.188999999999997
- type: ndcg_at_100
value: 34.589
- type: ndcg_at_1000
value: 39.93
- type: map_at_1
value: 5.903
- type: map_at_3
value: 10.375
- type: map_at_5
value: 12.503
- type: map_at_10
value: 14.997
- type: map_at_20
value: 16.383
- type: map_at_100
value: 17.769
- type: map_at_1000
value: 18.115000000000002
- type: recall_at_1
value: 5.903
- type: recall_at_3
value: 12.943
- type: recall_at_5
value: 17.818
- type: recall_at_10
value: 26.162999999999997
- type: recall_at_20
value: 34.46
- type: recall_at_100
value: 55.24
- type: recall_at_1000
value: 81.11500000000001
- type: precision_at_1
value: 29.099999999999998
- type: precision_at_3
value: 21.3
- type: precision_at_5
value: 17.599999999999998
- type: precision_at_10
value: 12.920000000000002
- type: precision_at_20
value: 8.5
- type: precision_at_100
value: 2.721
- type: precision_at_1000
value: 0.4
- type: mrr_at_1
value: 29.099999999999998
- type: mrr_at_3
value: 36.8667
- type: mrr_at_5
value: 38.9617
- type: mrr_at_10
value: 40.598600000000005
- type: mrr_at_20
value: 41.3132
- type: mrr_at_100
value: 41.7298
- type: mrr_at_1000
value: 41.7584
- type: nauc_ndcg_at_1_max
value: 18.4908
- type: nauc_ndcg_at_1_std
value: 8.3766
- type: nauc_ndcg_at_1_diff1
value: 22.0483
- type: nauc_ndcg_at_3_max
value: 19.1712
- type: nauc_ndcg_at_3_std
value: 10.407399999999999
- type: nauc_ndcg_at_3_diff1
value: 16.301299999999998
- type: nauc_ndcg_at_5_max
value: 20.069
- type: nauc_ndcg_at_5_std
value: 11.8195
- type: nauc_ndcg_at_5_diff1
value: 16.4224
- type: nauc_ndcg_at_10_max
value: 20.2641
- type: nauc_ndcg_at_10_std
value: 14.7043
- type: nauc_ndcg_at_10_diff1
value: 15.1067
- type: nauc_ndcg_at_20_max
value: 21.5015
- type: nauc_ndcg_at_20_std
value: 17.292199999999998
- type: nauc_ndcg_at_20_diff1
value: 14.1651
- type: nauc_ndcg_at_100_max
value: 23.2308
- type: nauc_ndcg_at_100_std
value: 22.425800000000002
- type: nauc_ndcg_at_100_diff1
value: 12.2911
- type: nauc_ndcg_at_1000_max
value: 23.4699
- type: nauc_ndcg_at_1000_std
value: 22.764499999999998
- type: nauc_ndcg_at_1000_diff1
value: 12.540000000000001
- type: nauc_map_at_1_max
value: 17.8236
- type: nauc_map_at_1_std
value: 8.1625
- type: nauc_map_at_1_diff1
value: 21.8362
- type: nauc_map_at_3_max
value: 19.1677
- type: nauc_map_at_3_std
value: 9.525699999999999
- type: nauc_map_at_3_diff1
value: 16.2302
- type: nauc_map_at_5_max
value: 20.3156
- type: nauc_map_at_5_std
value: 10.6218
- type: nauc_map_at_5_diff1
value: 16.2404
- type: nauc_map_at_10_max
value: 20.2029
- type: nauc_map_at_10_std
value: 12.9342
- type: nauc_map_at_10_diff1
value: 14.7992
- type: nauc_map_at_20_max
value: 20.7278
- type: nauc_map_at_20_std
value: 14.6676
- type: nauc_map_at_20_diff1
value: 14.0374
- type: nauc_map_at_100_max
value: 21.4579
- type: nauc_map_at_100_std
value: 16.490199999999998
- type: nauc_map_at_100_diff1
value: 13.5188
- type: nauc_map_at_1000_max
value: 21.5714
- type: nauc_map_at_1000_std
value: 16.746
- type: nauc_map_at_1000_diff1
value: 13.4916
- type: nauc_recall_at_1_max
value: 17.8236
- type: nauc_recall_at_1_std
value: 8.1625
- type: nauc_recall_at_1_diff1
value: 21.8362
- type: nauc_recall_at_3_max
value: 18.5764
- type: nauc_recall_at_3_std
value: 11.3546
- type: nauc_recall_at_3_diff1
value: 13.5547
- type: nauc_recall_at_5_max
value: 19.8008
- type: nauc_recall_at_5_std
value: 13.3324
- type: nauc_recall_at_5_diff1
value: 13.5029
- type: nauc_recall_at_10_max
value: 18.645400000000002
- type: nauc_recall_at_10_std
value: 17.5544
- type: nauc_recall_at_10_diff1
value: 10.4276
- type: nauc_recall_at_20_max
value: 19.9903
- type: nauc_recall_at_20_std
value: 21.9707
- type: nauc_recall_at_20_diff1
value: 7.863
- type: nauc_recall_at_100_max
value: 21.1196
- type: nauc_recall_at_100_std
value: 33.4524
- type: nauc_recall_at_100_diff1
value: 0.7107
- type: nauc_recall_at_1000_max
value: 20.188
- type: nauc_recall_at_1000_std
value: 39.786
- type: nauc_recall_at_1000_diff1
value: -4.0011
- type: nauc_precision_at_1_max
value: 18.4908
- type: nauc_precision_at_1_std
value: 8.3766
- type: nauc_precision_at_1_diff1
value: 22.0483
- type: nauc_precision_at_3_max
value: 19.178600000000003
- type: nauc_precision_at_3_std
value: 11.6744
- type: nauc_precision_at_3_diff1
value: 13.5931
- type: nauc_precision_at_5_max
value: 20.4581
- type: nauc_precision_at_5_std
value: 13.5076
- type: nauc_precision_at_5_diff1
value: 13.619600000000002
- type: nauc_precision_at_10_max
value: 19.437099999999997
- type: nauc_precision_at_10_std
value: 17.754900000000003
- type: nauc_precision_at_10_diff1
value: 10.6315
- type: nauc_precision_at_20_max
value: 20.9159
- type: nauc_precision_at_20_std
value: 21.9286
- type: nauc_precision_at_20_diff1
value: 8.251
- type: nauc_precision_at_100_max
value: 22.2412
- type: nauc_precision_at_100_std
value: 32.925900000000006
- type: nauc_precision_at_100_diff1
value: 1.5637999999999999
- type: nauc_precision_at_1000_max
value: 22.1
- type: nauc_precision_at_1000_std
value: 37.227700000000006
- type: nauc_precision_at_1000_diff1
value: -2.3548
- type: nauc_mrr_at_1_max
value: 18.4908
- type: nauc_mrr_at_1_std
value: 8.3766
- type: nauc_mrr_at_1_diff1
value: 22.0483
- type: nauc_mrr_at_3_max
value: 18.3602
- type: nauc_mrr_at_3_std
value: 9.579600000000001
- type: nauc_mrr_at_3_diff1
value: 18.7544
- type: nauc_mrr_at_5_max
value: 18.5224
- type: nauc_mrr_at_5_std
value: 9.747300000000001
- type: nauc_mrr_at_5_diff1
value: 19.0625
- type: nauc_mrr_at_10_max
value: 19.067
- type: nauc_mrr_at_10_std
value: 10.7103
- type: nauc_mrr_at_10_diff1
value: 19.444
- type: nauc_mrr_at_20_max
value: 19.118399999999998
- type: nauc_mrr_at_20_std
value: 10.6384
- type: nauc_mrr_at_20_diff1
value: 19.5223
- type: nauc_mrr_at_100_max
value: 19.200300000000002
- type: nauc_mrr_at_100_std
value: 10.6891
- type: nauc_mrr_at_100_diff1
value: 19.4009
- type: nauc_mrr_at_1000_max
value: 19.1676
- type: nauc_mrr_at_1000_std
value: 10.649799999999999
- type: nauc_mrr_at_1000_diff1
value: 19.4127
- type: main_score
value: 24.637999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: pearson
value: 86.8981
- type: spearman
value: 82.234
- type: cosine_pearson
value: 86.8981
- type: cosine_spearman
value: 82.234
- type: manhattan_pearson
value: 84.1165
- type: manhattan_spearman
value: 82.063
- type: euclidean_pearson
value: 84.0688
- type: euclidean_spearman
value: 82.2329
- type: main_score
value: 82.234
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: pearson
value: 87.8269
- type: spearman
value: 80.2578
- type: cosine_pearson
value: 87.8269
- type: cosine_spearman
value: 80.2578
- type: manhattan_pearson
value: 85.09020000000001
- type: manhattan_spearman
value: 80.5777
- type: euclidean_pearson
value: 84.71990000000001
- type: euclidean_spearman
value: 80.2547
- type: main_score
value: 80.2578
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: pearson
value: 89.45769999999999
- type: spearman
value: 89.69200000000001
- type: cosine_pearson
value: 89.45769999999999
- type: cosine_spearman
value: 89.69200000000001
- type: manhattan_pearson
value: 89.1685
- type: manhattan_spearman
value: 89.8889
- type: euclidean_pearson
value: 89.07969999999999
- type: euclidean_spearman
value: 89.7003
- type: main_score
value: 89.69200000000001
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: pearson
value: 87.01989999999999
- type: spearman
value: 85.3155
- type: cosine_pearson
value: 87.01989999999999
- type: cosine_spearman
value: 85.3155
- type: manhattan_pearson
value: 86.24159999999999
- type: manhattan_spearman
value: 85.544
- type: euclidean_pearson
value: 86.1032
- type: euclidean_spearman
value: 85.31490000000001
- type: main_score
value: 85.3155
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: pearson
value: 89.593
- type: spearman
value: 90.1574
- type: cosine_pearson
value: 89.593
- type: cosine_spearman
value: 90.1574
- type: manhattan_pearson
value: 89.93910000000001
- type: manhattan_spearman
value: 90.4357
- type: euclidean_pearson
value: 89.76650000000001
- type: euclidean_spearman
value: 90.1554
- type: main_score
value: 90.1574
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: pearson
value: 85.03540000000001
- type: spearman
value: 86.4191
- type: cosine_pearson
value: 85.03540000000001
- type: cosine_spearman
value: 86.4191
- type: manhattan_pearson
value: 85.8992
- type: manhattan_spearman
value: 86.7274
- type: euclidean_pearson
value: 85.638
- type: euclidean_spearman
value: 86.42
- type: main_score
value: 86.4191
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 54.1098
- type: spearman
value: 52.721
- type: cosine_pearson
value: 54.1098
- type: cosine_spearman
value: 52.721
- type: manhattan_pearson
value: 54.890899999999995
- type: manhattan_spearman
value: 52.727900000000005
- type: euclidean_pearson
value: 54.65410000000001
- type: euclidean_spearman
value: 52.8582
- type: main_score
value: 52.721
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 84.3262
- type: spearman
value: 84.1421
- type: cosine_pearson
value: 84.3262
- type: cosine_spearman
value: 84.1421
- type: manhattan_pearson
value: 83.8999
- type: manhattan_spearman
value: 83.9987
- type: euclidean_pearson
value: 84.0174
- type: euclidean_spearman
value: 84.1711
- type: main_score
value: 84.1421
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 91.38969999999999
- type: spearman
value: 91.00789999999999
- type: cosine_pearson
value: 91.38969999999999
- type: cosine_spearman
value: 91.00789999999999
- type: manhattan_pearson
value: 90.8917
- type: manhattan_spearman
value: 90.7834
- type: euclidean_pearson
value: 91.0361
- type: euclidean_spearman
value: 91.011
- type: main_score
value: 91.00789999999999
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 84.636
- type: spearman
value: 84.1977
- type: cosine_pearson
value: 84.636
- type: cosine_spearman
value: 84.1977
- type: manhattan_pearson
value: 84.6895
- type: manhattan_spearman
value: 84.1849
- type: euclidean_pearson
value: 84.753
- type: euclidean_spearman
value: 84.2251
- type: main_score
value: 84.1977
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 85.8595
- type: spearman
value: 86.9965
- type: cosine_pearson
value: 85.8595
- type: cosine_spearman
value: 86.9965
- type: manhattan_pearson
value: 86.92450000000001
- type: manhattan_spearman
value: 87.39489999999999
- type: euclidean_pearson
value: 86.5575
- type: euclidean_spearman
value: 86.997
- type: main_score
value: 86.9965
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 85.959
- type: spearman
value: 86.3103
- type: cosine_pearson
value: 85.959
- type: cosine_spearman
value: 86.3103
- type: manhattan_pearson
value: 85.7486
- type: manhattan_spearman
value: 86.0977
- type: euclidean_pearson
value: 85.8361
- type: euclidean_spearman
value: 86.2741
- type: main_score
value: 86.3103
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 86.2629
- type: spearman
value: 85.8372
- type: cosine_pearson
value: 86.2629
- type: cosine_spearman
value: 85.8372
- type: manhattan_pearson
value: 86.2491
- type: manhattan_spearman
value: 85.4973
- type: euclidean_pearson
value: 86.5074
- type: euclidean_spearman
value: 85.83370000000001
- type: main_score
value: 85.8372
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 57.5675
- type: spearman
value: 55.564899999999994
- type: cosine_pearson
value: 57.5675
- type: cosine_spearman
value: 55.564899999999994
- type: manhattan_pearson
value: 57.4686
- type: manhattan_spearman
value: 55.3816
- type: euclidean_pearson
value: 57.635099999999994
- type: euclidean_spearman
value: 55.501900000000006
- type: main_score
value: 55.564899999999994
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 76.1237
- type: spearman
value: 76.4
- type: cosine_pearson
value: 76.1237
- type: cosine_spearman
value: 76.4
- type: manhattan_pearson
value: 78.6794
- type: manhattan_spearman
value: 77.2282
- type: euclidean_pearson
value: 77.1688
- type: euclidean_spearman
value: 76.4
- type: main_score
value: 76.4
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 73.89099999999999
- type: spearman
value: 77.8292
- type: cosine_pearson
value: 73.89099999999999
- type: cosine_spearman
value: 77.8292
- type: manhattan_pearson
value: 77.53229999999999
- type: manhattan_spearman
value: 78.3926
- type: euclidean_pearson
value: 76.6734
- type: euclidean_spearman
value: 77.8721
- type: main_score
value: 77.8292
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 68.31099999999999
- type: spearman
value: 68.0764
- type: cosine_pearson
value: 68.31099999999999
- type: cosine_spearman
value: 68.0764
- type: manhattan_pearson
value: 69.7423
- type: manhattan_spearman
value: 68.3212
- type: euclidean_pearson
value: 69.5701
- type: euclidean_spearman
value: 68.2299
- type: main_score
value: 68.0764
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 65.0636
- type: spearman
value: 59.4669
- type: cosine_pearson
value: 65.0636
- type: cosine_spearman
value: 59.4669
- type: manhattan_pearson
value: 69.723
- type: manhattan_spearman
value: 60.3823
- type: euclidean_pearson
value: 68.6454
- type: euclidean_spearman
value: 59.384899999999995
- type: main_score
value: 59.4669
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 73.7603
- type: spearman
value: 73.0799
- type: cosine_pearson
value: 73.7603
- type: cosine_spearman
value: 73.0799
- type: manhattan_pearson
value: 76.2351
- type: manhattan_spearman
value: 73.60719999999999
- type: euclidean_pearson
value: 75.61529999999999
- type: euclidean_spearman
value: 72.9809
- type: main_score
value: 73.0799
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: pearson
value: 88.1242
- type: spearman
value: 88.8456
- type: cosine_pearson
value: 88.1242
- type: cosine_spearman
value: 88.8456
- type: manhattan_pearson
value: 88.6122
- type: manhattan_spearman
value: 89.0198
- type: euclidean_pearson
value: 88.5314
- type: euclidean_spearman
value: 88.8432
- type: main_score
value: 88.8456
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.06920000000001
- type: mrr
value: 96.977
- type: nAUC_map_max
value: 49.1778
- type: nAUC_map_std
value: 62.091499999999996
- type: nAUC_map_diff1
value: -13.373899999999999
- type: nAUC_mrr_max
value: 86.7325
- type: nAUC_mrr_std
value: 74.958
- type: nAUC_mrr_diff1
value: 25.6955
- type: main_score
value: 89.06920000000001
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: ndcg_at_1
value: 67.333
- type: ndcg_at_3
value: 75.142
- type: ndcg_at_5
value: 78.873
- type: ndcg_at_10
value: 80.372
- type: ndcg_at_20
value: 80.914
- type: ndcg_at_100
value: 81.563
- type: ndcg_at_1000
value: 81.601
- type: map_at_1
value: 63.99399999999999
- type: map_at_3
value: 72.181
- type: map_at_5
value: 74.665
- type: map_at_10
value: 75.51299999999999
- type: map_at_20
value: 75.693
- type: map_at_100
value: 75.78
- type: map_at_1000
value: 75.78099999999999
- type: recall_at_1
value: 63.99399999999999
- type: recall_at_3
value: 80.63900000000001
- type: recall_at_5
value: 90.017
- type: recall_at_10
value: 94.167
- type: recall_at_20
value: 96.167
- type: recall_at_100
value: 99.667
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 67.333
- type: precision_at_3
value: 29.444
- type: precision_at_5
value: 20.067
- type: precision_at_10
value: 10.667
- type: precision_at_20
value: 5.45
- type: precision_at_100
value: 1.127
- type: precision_at_1000
value: 0.11299999999999999
- type: mrr_at_1
value: 67.3333
- type: mrr_at_3
value: 73.9444
- type: mrr_at_5
value: 76.0444
- type: mrr_at_10
value: 76.4517
- type: mrr_at_20
value: 76.5739
- type: mrr_at_100
value: 76.6556
- type: mrr_at_1000
value: 76.6563
- type: nauc_ndcg_at_1_max
value: 55.373099999999994
- type: nauc_ndcg_at_1_std
value: -11.362
- type: nauc_ndcg_at_1_diff1
value: 81.5232
- type: nauc_ndcg_at_3_max
value: 58.0376
- type: nauc_ndcg_at_3_std
value: -14.4854
- type: nauc_ndcg_at_3_diff1
value: 77.848
- type: nauc_ndcg_at_5_max
value: 55.789699999999996
- type: nauc_ndcg_at_5_std
value: -15.0412
- type: nauc_ndcg_at_5_diff1
value: 77.7873
- type: nauc_ndcg_at_10_max
value: 58.6243
- type: nauc_ndcg_at_10_std
value: -13.867799999999999
- type: nauc_ndcg_at_10_diff1
value: 78.071
- type: nauc_ndcg_at_20_max
value: 58.9032
- type: nauc_ndcg_at_20_std
value: -12.7998
- type: nauc_ndcg_at_20_diff1
value: 78.12010000000001
- type: nauc_ndcg_at_100_max
value: 58.0772
- type: nauc_ndcg_at_100_std
value: -12.4604
- type: nauc_ndcg_at_100_diff1
value: 78.7002
- type: nauc_ndcg_at_1000_max
value: 58.0835
- type: nauc_ndcg_at_1000_std
value: -12.662799999999999
- type: nauc_ndcg_at_1000_diff1
value: 78.7157
- type: nauc_map_at_1_max
value: 45.2711
- type: nauc_map_at_1_std
value: -21.685
- type: nauc_map_at_1_diff1
value: 80.8536
- type: nauc_map_at_3_max
value: 53.970600000000005
- type: nauc_map_at_3_std
value: -18.0178
- type: nauc_map_at_3_diff1
value: 78.95309999999999
- type: nauc_map_at_5_max
value: 54.7643
- type: nauc_map_at_5_std
value: -15.943999999999999
- type: nauc_map_at_5_diff1
value: 78.9665
- type: nauc_map_at_10_max
value: 56.737899999999996
- type: nauc_map_at_10_std
value: -14.355599999999999
- type: nauc_map_at_10_diff1
value: 79.095
- type: nauc_map_at_20_max
value: 56.804
- type: nauc_map_at_20_std
value: -14.033599999999998
- type: nauc_map_at_20_diff1
value: 79.0729
- type: nauc_map_at_100_max
value: 56.7168
- type: nauc_map_at_100_std
value: -14.000000000000002
- type: nauc_map_at_100_diff1
value: 79.1456
- type: nauc_map_at_1000_max
value: 56.7168
- type: nauc_map_at_1000_std
value: -14.003099999999998
- type: nauc_map_at_1000_diff1
value: 79.1458
- type: nauc_recall_at_1_max
value: 45.2711
- type: nauc_recall_at_1_std
value: -21.685
- type: nauc_recall_at_1_diff1
value: 80.8536
- type: nauc_recall_at_3_max
value: 55.911100000000005
- type: nauc_recall_at_3_std
value: -19.1603
- type: nauc_recall_at_3_diff1
value: 72.51689999999999
- type: nauc_recall_at_5_max
value: 50.6317
- type: nauc_recall_at_5_std
value: -21.4004
- type: nauc_recall_at_5_diff1
value: 68.3824
- type: nauc_recall_at_10_max
value: 67.3176
- type: nauc_recall_at_10_std
value: -23.6601
- type: nauc_recall_at_10_diff1
value: 67.20150000000001
- type: nauc_recall_at_20_max
value: 76.566
- type: nauc_recall_at_20_std
value: -12.4873
- type: nauc_recall_at_20_diff1
value: 63.6849
- type: nauc_recall_at_100_max
value: 55.415499999999994
- type: nauc_recall_at_100_std
value: 72.2222
- type: nauc_recall_at_100_diff1
value: 72.2222
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.373099999999994
- type: nauc_precision_at_1_std
value: -11.362
- type: nauc_precision_at_1_diff1
value: 81.5232
- type: nauc_precision_at_3_max
value: 59.23969999999999
- type: nauc_precision_at_3_std
value: 17.3868
- type: nauc_precision_at_3_diff1
value: 37.0438
- type: nauc_precision_at_5_max
value: 51.358599999999996
- type: nauc_precision_at_5_std
value: 39.9722
- type: nauc_precision_at_5_diff1
value: 9.2927
- type: nauc_precision_at_10_max
value: 44.6166
- type: nauc_precision_at_10_std
value: 46.4231
- type: nauc_precision_at_10_diff1
value: -7.435700000000001
- type: nauc_precision_at_20_max
value: 41.1852
- type: nauc_precision_at_20_std
value: 51.89359999999999
- type: nauc_precision_at_20_diff1
value: -14.6318
- type: nauc_precision_at_100_max
value: 33.6188
- type: nauc_precision_at_100_std
value: 60.621700000000004
- type: nauc_precision_at_100_diff1
value: -26.376500000000004
- type: nauc_precision_at_1000_max
value: 33.2976
- type: nauc_precision_at_1000_std
value: 60.4508
- type: nauc_precision_at_1000_diff1
value: -27.829500000000003
- type: nauc_mrr_at_1_max
value: 55.373099999999994
- type: nauc_mrr_at_1_std
value: -11.362
- type: nauc_mrr_at_1_diff1
value: 81.5232
- type: nauc_mrr_at_3_max
value: 58.7918
- type: nauc_mrr_at_3_std
value: -11.580300000000001
- type: nauc_mrr_at_3_diff1
value: 78.79310000000001
- type: nauc_mrr_at_5_max
value: 58.13270000000001
- type: nauc_mrr_at_5_std
value: -11.2346
- type: nauc_mrr_at_5_diff1
value: 78.8896
- type: nauc_mrr_at_10_max
value: 58.4249
- type: nauc_mrr_at_10_std
value: -11.842600000000001
- type: nauc_mrr_at_10_diff1
value: 79.0629
- type: nauc_mrr_at_20_max
value: 58.4409
- type: nauc_mrr_at_20_std
value: -11.7012
- type: nauc_mrr_at_20_diff1
value: 79.1097
- type: nauc_mrr_at_100_max
value: 58.3685
- type: nauc_mrr_at_100_std
value: -11.6333
- type: nauc_mrr_at_100_diff1
value: 79.1871
- type: nauc_mrr_at_1000_max
value: 58.3686
- type: nauc_mrr_at_1000_std
value: -11.6365
- type: nauc_mrr_at_1000_diff1
value: 79.1874
- type: main_score
value: 80.372
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: similarity_accuracy
value: 99.8317
- type: similarity_accuracy_threshold
value: 87.7015
- type: similarity_f1
value: 91.2821
- type: similarity_f1_threshold
value: 87.68599999999999
- type: similarity_precision
value: 93.68419999999999
- type: similarity_recall
value: 89.0
- type: similarity_ap
value: 96.249
- type: cosine_accuracy
value: 99.8317
- type: cosine_accuracy_threshold
value: 87.7015
- type: cosine_f1
value: 91.2821
- type: cosine_f1_threshold
value: 87.68599999999999
- type: cosine_precision
value: 93.68419999999999
- type: cosine_recall
value: 89.0
- type: cosine_ap
value: 96.249
- type: manhattan_accuracy
value: 99.83370000000001
- type: manhattan_accuracy_threshold
value: 4133.4675
- type: manhattan_f1
value: 91.52369999999999
- type: manhattan_f1_threshold
value: 4244.392400000001
- type: manhattan_precision
value: 92.3625
- type: manhattan_recall
value: 90.7
- type: manhattan_ap
value: 96.4021
- type: euclidean_accuracy
value: 99.83070000000001
- type: euclidean_accuracy_threshold
value: 49.684200000000004
- type: euclidean_f1
value: 91.24419999999999
- type: euclidean_f1_threshold
value: 49.7175
- type: euclidean_precision
value: 93.4942
- type: euclidean_recall
value: 89.1
- type: euclidean_ap
value: 96.24589999999999
- type: dot_accuracy
value: 99.8277
- type: dot_accuracy_threshold
value: 87.3743
- type: dot_f1
value: 91.1495
- type: dot_f1_threshold
value: 87.3743
- type: dot_precision
value: 92.7536
- type: dot_recall
value: 89.60000000000001
- type: dot_ap
value: 96.23509999999999
- type: max_accuracy
value: 99.83370000000001
- type: max_f1
value: 91.52369999999999
- type: max_precision
value: 93.68419999999999
- type: max_recall
value: 90.7
- type: max_ap
value: 96.4021
- type: main_score
value: 96.4021
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 80.19420000000001
- type: v_measure_std
value: 2.8956
- type: main_score
value: 80.19420000000001
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.416900000000005
- type: v_measure_std
value: 1.5938
- type: main_score
value: 49.416900000000005
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.579
- type: mrr
value: 55.598800000000004
- type: nAUC_map_max
value: 9.1665
- type: nAUC_map_std
value: 7.8292
- type: nAUC_map_diff1
value: 38.831199999999995
- type: nAUC_mrr_max
value: 9.7435
- type: nAUC_mrr_std
value: 8.030800000000001
- type: nAUC_mrr_diff1
value: 39.2785
- type: main_score
value: 54.579
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: pearson
value: 31.6006
- type: spearman
value: 31.4212
- type: cosine_spearman
value: 31.4212
- type: cosine_pearson
value: 31.6006
- type: dot_spearman
value: 31.5459
- type: dot_pearson
value: 31.6412
- type: main_score
value: 31.4212
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_3
value: 89.592
- type: ndcg_at_5
value: 88.596
- type: ndcg_at_10
value: 86.732
- type: ndcg_at_20
value: 84.08099999999999
- type: ndcg_at_100
value: 70.204
- type: ndcg_at_1000
value: 63.29
- type: map_at_1
value: 0.243
- type: map_at_3
value: 0.703
- type: map_at_5
value: 1.142
- type: map_at_10
value: 2.2270000000000003
- type: map_at_20
value: 4.176
- type: map_at_100
value: 15.104999999999999
- type: map_at_1000
value: 37.759
- type: recall_at_1
value: 0.243
- type: recall_at_3
value: 0.726
- type: recall_at_5
value: 1.192
- type: recall_at_10
value: 2.359
- type: recall_at_20
value: 4.539
- type: recall_at_100
value: 17.802
- type: recall_at_1000
value: 59.906000000000006
- type: precision_at_1
value: 94.0
- type: precision_at_3
value: 93.333
- type: precision_at_5
value: 91.60000000000001
- type: precision_at_10
value: 90.0
- type: precision_at_20
value: 87.6
- type: precision_at_100
value: 72.42
- type: precision_at_1000
value: 27.842
- type: mrr_at_1
value: 94.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: nauc_ndcg_at_1_max
value: 53.2855
- type: nauc_ndcg_at_1_std
value: 57.4638
- type: nauc_ndcg_at_1_diff1
value: 4.4059
- type: nauc_ndcg_at_3_max
value: 37.2953
- type: nauc_ndcg_at_3_std
value: 46.1686
- type: nauc_ndcg_at_3_diff1
value: 17.6265
- type: nauc_ndcg_at_5_max
value: 30.1947
- type: nauc_ndcg_at_5_std
value: 53.0996
- type: nauc_ndcg_at_5_diff1
value: 16.5411
- type: nauc_ndcg_at_10_max
value: 36.923
- type: nauc_ndcg_at_10_std
value: 58.658100000000005
- type: nauc_ndcg_at_10_diff1
value: 13.2686
- type: nauc_ndcg_at_20_max
value: 40.2471
- type: nauc_ndcg_at_20_std
value: 62.5481
- type: nauc_ndcg_at_20_diff1
value: 5.9675
- type: nauc_ndcg_at_100_max
value: 38.2307
- type: nauc_ndcg_at_100_std
value: 67.91329999999999
- type: nauc_ndcg_at_100_diff1
value: -12.2539
- type: nauc_ndcg_at_1000_max
value: 41.875
- type: nauc_ndcg_at_1000_std
value: 75.5431
- type: nauc_ndcg_at_1000_diff1
value: -2.8021000000000003
- type: nauc_map_at_1_max
value: -8.0287
- type: nauc_map_at_1_std
value: 26.559500000000003
- type: nauc_map_at_1_diff1
value: 13.9958
- type: nauc_map_at_3_max
value: -3.7122
- type: nauc_map_at_3_std
value: 30.8674
- type: nauc_map_at_3_diff1
value: 25.9597
- type: nauc_map_at_5_max
value: -2.6957
- type: nauc_map_at_5_std
value: 34.3742
- type: nauc_map_at_5_diff1
value: 24.567
- type: nauc_map_at_10_max
value: -1.028
- type: nauc_map_at_10_std
value: 38.6623
- type: nauc_map_at_10_diff1
value: 23.482
- type: nauc_map_at_20_max
value: 0.9729
- type: nauc_map_at_20_std
value: 41.4966
- type: nauc_map_at_20_diff1
value: 21.6255
- type: nauc_map_at_100_max
value: 16.6328
- type: nauc_map_at_100_std
value: 62.829100000000004
- type: nauc_map_at_100_diff1
value: 5.6174
- type: nauc_map_at_1000_max
value: 40.1756
- type: nauc_map_at_1000_std
value: 74.6962
- type: nauc_map_at_1000_diff1
value: -8.3158
- type: nauc_recall_at_1_max
value: -8.0287
- type: nauc_recall_at_1_std
value: 26.559500000000003
- type: nauc_recall_at_1_diff1
value: 13.9958
- type: nauc_recall_at_3_max
value: -6.4821
- type: nauc_recall_at_3_std
value: 24.8695
- type: nauc_recall_at_3_diff1
value: 25.6479
- type: nauc_recall_at_5_max
value: -6.8088999999999995
- type: nauc_recall_at_5_std
value: 28.340700000000002
- type: nauc_recall_at_5_diff1
value: 22.797700000000003
- type: nauc_recall_at_10_max
value: -5.6834999999999996
- type: nauc_recall_at_10_std
value: 30.5569
- type: nauc_recall_at_10_diff1
value: 21.330099999999998
- type: nauc_recall_at_20_max
value: -5.9924
- type: nauc_recall_at_20_std
value: 30.4601
- type: nauc_recall_at_20_diff1
value: 19.365299999999998
- type: nauc_recall_at_100_max
value: 7.007099999999999
- type: nauc_recall_at_100_std
value: 48.9423
- type: nauc_recall_at_100_diff1
value: 9.1493
- type: nauc_recall_at_1000_max
value: 37.2208
- type: nauc_recall_at_1000_std
value: 64.91550000000001
- type: nauc_recall_at_1000_diff1
value: -2.8935
- type: nauc_precision_at_1_max
value: 42.343599999999995
- type: nauc_precision_at_1_std
value: 69.3433
- type: nauc_precision_at_1_diff1
value: -20.028000000000002
- type: nauc_precision_at_3_max
value: 49.335699999999996
- type: nauc_precision_at_3_std
value: 45.994
- type: nauc_precision_at_3_diff1
value: 22.622400000000003
- type: nauc_precision_at_5_max
value: 38.3816
- type: nauc_precision_at_5_std
value: 55.8298
- type: nauc_precision_at_5_diff1
value: 8.4025
- type: nauc_precision_at_10_max
value: 41.6808
- type: nauc_precision_at_10_std
value: 66.1322
- type: nauc_precision_at_10_diff1
value: 4.665699999999999
- type: nauc_precision_at_20_max
value: 42.0959
- type: nauc_precision_at_20_std
value: 61.7217
- type: nauc_precision_at_20_diff1
value: -3.4937000000000005
- type: nauc_precision_at_100_max
value: 40.4606
- type: nauc_precision_at_100_std
value: 63.3467
- type: nauc_precision_at_100_diff1
value: -16.487199999999998
- type: nauc_precision_at_1000_max
value: 39.4253
- type: nauc_precision_at_1000_std
value: 18.8794
- type: nauc_precision_at_1000_diff1
value: -18.4566
- type: nauc_mrr_at_1_max
value: 42.343599999999995
- type: nauc_mrr_at_1_std
value: 69.3433
- type: nauc_mrr_at_1_diff1
value: -20.028000000000002
- type: nauc_mrr_at_3_max
value: 42.343599999999995
- type: nauc_mrr_at_3_std
value: 69.3433
- type: nauc_mrr_at_3_diff1
value: -20.028000000000002
- type: nauc_mrr_at_5_max
value: 42.343599999999995
- type: nauc_mrr_at_5_std
value: 69.3433
- type: nauc_mrr_at_5_diff1
value: -20.028000000000002
- type: nauc_mrr_at_10_max
value: 42.343599999999995
- type: nauc_mrr_at_10_std
value: 69.3433
- type: nauc_mrr_at_10_diff1
value: -20.028000000000002
- type: nauc_mrr_at_20_max
value: 42.343599999999995
- type: nauc_mrr_at_20_std
value: 69.3433
- type: nauc_mrr_at_20_diff1
value: -20.028000000000002
- type: nauc_mrr_at_100_max
value: 42.343599999999995
- type: nauc_mrr_at_100_std
value: 69.3433
- type: nauc_mrr_at_100_diff1
value: -20.028000000000002
- type: nauc_mrr_at_1000_max
value: 42.343599999999995
- type: nauc_mrr_at_1000_std
value: 69.3433
- type: nauc_mrr_at_1000_diff1
value: -20.028000000000002
- type: main_score
value: 86.732
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_3
value: 37.185
- type: ndcg_at_5
value: 32.564
- type: ndcg_at_10
value: 31.395
- type: ndcg_at_20
value: 30.494
- type: ndcg_at_100
value: 42.617
- type: ndcg_at_1000
value: 53.324000000000005
- type: map_at_1
value: 2.7640000000000002
- type: map_at_3
value: 6.970999999999999
- type: map_at_5
value: 8.518
- type: map_at_10
value: 11.919
- type: map_at_20
value: 14.389
- type: map_at_100
value: 18.799
- type: map_at_1000
value: 20.366
- type: recall_at_1
value: 2.7640000000000002
- type: recall_at_3
value: 8.323
- type: recall_at_5
value: 11.259
- type: recall_at_10
value: 19.088
- type: recall_at_20
value: 26.488
- type: recall_at_100
value: 53.437
- type: recall_at_1000
value: 86.385
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_3
value: 38.095
- type: precision_at_5
value: 31.429000000000002
- type: precision_at_10
value: 27.346999999999998
- type: precision_at_20
value: 19.184
- type: precision_at_100
value: 8.713999999999999
- type: precision_at_1000
value: 1.6119999999999999
- type: mrr_at_1
value: 38.7755
- type: mrr_at_3
value: 52.7211
- type: mrr_at_5
value: 55.9864
- type: mrr_at_10
value: 57.2554
- type: mrr_at_20
value: 57.382999999999996
- type: mrr_at_100
value: 57.5539
- type: mrr_at_1000
value: 57.5539
- type: nauc_ndcg_at_1_max
value: -20.4965
- type: nauc_ndcg_at_1_std
value: -0.3794
- type: nauc_ndcg_at_1_diff1
value: 24.9951
- type: nauc_ndcg_at_3_max
value: -26.1758
- type: nauc_ndcg_at_3_std
value: 12.2535
- type: nauc_ndcg_at_3_diff1
value: 15.765199999999998
- type: nauc_ndcg_at_5_max
value: -27.4523
- type: nauc_ndcg_at_5_std
value: 5.7325
- type: nauc_ndcg_at_5_diff1
value: 12.3423
- type: nauc_ndcg_at_10_max
value: -24.3957
- type: nauc_ndcg_at_10_std
value: 3.526
- type: nauc_ndcg_at_10_diff1
value: 7.2427
- type: nauc_ndcg_at_20_max
value: -20.866799999999998
- type: nauc_ndcg_at_20_std
value: 6.1578
- type: nauc_ndcg_at_20_diff1
value: 4.1767
- type: nauc_ndcg_at_100_max
value: -17.9588
- type: nauc_ndcg_at_100_std
value: 33.1442
- type: nauc_ndcg_at_100_diff1
value: 4.356199999999999
- type: nauc_ndcg_at_1000_max
value: -11.1072
- type: nauc_ndcg_at_1000_std
value: 35.9222
- type: nauc_ndcg_at_1000_diff1
value: 5.9171000000000005
- type: nauc_map_at_1_max
value: -22.354
- type: nauc_map_at_1_std
value: -10.9271
- type: nauc_map_at_1_diff1
value: 27.7288
- type: nauc_map_at_3_max
value: -22.397
- type: nauc_map_at_3_std
value: -7.2767
- type: nauc_map_at_3_diff1
value: 15.101899999999999
- type: nauc_map_at_5_max
value: -25.7693
- type: nauc_map_at_5_std
value: -10.6509
- type: nauc_map_at_5_diff1
value: 6.9317
- type: nauc_map_at_10_max
value: -22.2658
- type: nauc_map_at_10_std
value: -7.2035
- type: nauc_map_at_10_diff1
value: 3.9056
- type: nauc_map_at_20_max
value: -16.2962
- type: nauc_map_at_20_std
value: -2.595
- type: nauc_map_at_20_diff1
value: 0.5059
- type: nauc_map_at_100_max
value: -15.7023
- type: nauc_map_at_100_std
value: 11.6893
- type: nauc_map_at_100_diff1
value: -0.6727000000000001
- type: nauc_map_at_1000_max
value: -14.162700000000001
- type: nauc_map_at_1000_std
value: 14.4614
- type: nauc_map_at_1000_diff1
value: -0.3813
- type: nauc_recall_at_1_max
value: -22.354
- type: nauc_recall_at_1_std
value: -10.9271
- type: nauc_recall_at_1_diff1
value: 27.7288
- type: nauc_recall_at_3_max
value: -25.067
- type: nauc_recall_at_3_std
value: -3.5932
- type: nauc_recall_at_3_diff1
value: 7.6448
- type: nauc_recall_at_5_max
value: -31.9521
- type: nauc_recall_at_5_std
value: -12.973299999999998
- type: nauc_recall_at_5_diff1
value: -0.8931
- type: nauc_recall_at_10_max
value: -26.1132
- type: nauc_recall_at_10_std
value: -7.3447
- type: nauc_recall_at_10_diff1
value: -4.7741999999999996
- type: nauc_recall_at_20_max
value: -18.1067
- type: nauc_recall_at_20_std
value: 2.8136
- type: nauc_recall_at_20_diff1
value: -10.4605
- type: nauc_recall_at_100_max
value: -16.423199999999998
- type: nauc_recall_at_100_std
value: 44.8333
- type: nauc_recall_at_100_diff1
value: -6.1889
- type: nauc_recall_at_1000_max
value: 8.706
- type: nauc_recall_at_1000_std
value: 74.9427
- type: nauc_recall_at_1000_diff1
value: -11.8207
- type: nauc_precision_at_1_max
value: -17.1924
- type: nauc_precision_at_1_std
value: 0.0948
- type: nauc_precision_at_1_diff1
value: 28.084300000000002
- type: nauc_precision_at_3_max
value: -23.0318
- type: nauc_precision_at_3_std
value: 14.8594
- type: nauc_precision_at_3_diff1
value: 12.4551
- type: nauc_precision_at_5_max
value: -26.849800000000002
- type: nauc_precision_at_5_std
value: 2.2552
- type: nauc_precision_at_5_diff1
value: 6.4963999999999995
- type: nauc_precision_at_10_max
value: -14.1258
- type: nauc_precision_at_10_std
value: 9.764299999999999
- type: nauc_precision_at_10_diff1
value: 2.9594
- type: nauc_precision_at_20_max
value: 0.6306
- type: nauc_precision_at_20_std
value: 24.2147
- type: nauc_precision_at_20_diff1
value: -3.5282
- type: nauc_precision_at_100_max
value: 11.4275
- type: nauc_precision_at_100_std
value: 71.0325
- type: nauc_precision_at_100_diff1
value: -6.978800000000001
- type: nauc_precision_at_1000_max
value: 37.155899999999995
- type: nauc_precision_at_1000_std
value: 20.5794
- type: nauc_precision_at_1000_diff1
value: -11.6191
- type: nauc_mrr_at_1_max
value: -17.1924
- type: nauc_mrr_at_1_std
value: 0.0948
- type: nauc_mrr_at_1_diff1
value: 28.084300000000002
- type: nauc_mrr_at_3_max
value: -26.1233
- type: nauc_mrr_at_3_std
value: 12.6302
- type: nauc_mrr_at_3_diff1
value: 17.8467
- type: nauc_mrr_at_5_max
value: -23.641499999999997
- type: nauc_mrr_at_5_std
value: 8.3047
- type: nauc_mrr_at_5_diff1
value: 20.5063
- type: nauc_mrr_at_10_max
value: -23.5564
- type: nauc_mrr_at_10_std
value: 8.5333
- type: nauc_mrr_at_10_diff1
value: 20.5287
- type: nauc_mrr_at_20_max
value: -23.154700000000002
- type: nauc_mrr_at_20_std
value: 9.079600000000001
- type: nauc_mrr_at_20_diff1
value: 21.3888
- type: nauc_mrr_at_100_max
value: -23.3008
- type: nauc_mrr_at_100_std
value: 8.7424
- type: nauc_mrr_at_100_diff1
value: 21.2832
- type: nauc_mrr_at_1000_max
value: -23.3008
- type: nauc_mrr_at_1000_std
value: 8.7424
- type: nauc_mrr_at_1000_diff1
value: 21.2832
- type: main_score
value: 31.395
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 91.3135
- type: f1
value: 78.12480000000001
- type: f1_weighted
value: 92.4208
- type: ap
value: 41.943599999999996
- type: ap_weighted
value: 41.943599999999996
- type: main_score
value: 91.3135
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 77.24109999999999
- type: f1
value: 77.4964
- type: f1_weighted
value: 77.0248
- type: main_score
value: 77.24109999999999
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 63.132200000000005
- type: v_measure_std
value: 0.7163
- type: main_score
value: 63.132200000000005
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: similarity_accuracy
value: 88.061
- type: similarity_accuracy_threshold
value: 85.9546
- type: similarity_f1
value: 73.36240000000001
- type: similarity_f1_threshold
value: 84.0127
- type: similarity_precision
value: 71.47149999999999
- type: similarity_recall
value: 75.3562
- type: similarity_ap
value: 80.2787
- type: cosine_accuracy
value: 88.061
- type: cosine_accuracy_threshold
value: 85.9546
- type: cosine_f1
value: 73.36240000000001
- type: cosine_f1_threshold
value: 84.0127
- type: cosine_precision
value: 71.47149999999999
- type: cosine_recall
value: 75.3562
- type: cosine_ap
value: 80.2787
- type: manhattan_accuracy
value: 87.8941
- type: manhattan_accuracy_threshold
value: 4374.3263
- type: manhattan_f1
value: 72.929
- type: manhattan_f1_threshold
value: 4637.7289
- type: manhattan_precision
value: 71.3636
- type: manhattan_recall
value: 74.5646
- type: manhattan_ap
value: 79.98140000000001
- type: euclidean_accuracy
value: 88.0789
- type: euclidean_accuracy_threshold
value: 52.9742
- type: euclidean_f1
value: 73.3591
- type: euclidean_f1_threshold
value: 56.4815
- type: euclidean_precision
value: 71.608
- type: euclidean_recall
value: 75.19789999999999
- type: euclidean_ap
value: 80.2656
- type: dot_accuracy
value: 88.0789
- type: dot_accuracy_threshold
value: 85.8034
- type: dot_f1
value: 73.40379999999999
- type: dot_f1_threshold
value: 84.3213
- type: dot_precision
value: 72.6544
- type: dot_recall
value: 74.16890000000001
- type: dot_ap
value: 80.32010000000001
- type: max_accuracy
value: 88.0789
- type: max_f1
value: 73.40379999999999
- type: max_precision
value: 72.6544
- type: max_recall
value: 75.3562
- type: max_ap
value: 80.32010000000001
- type: main_score
value: 80.32010000000001
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: similarity_accuracy
value: 89.93090000000001
- type: similarity_accuracy_threshold
value: 82.1349
- type: similarity_f1
value: 80.1377
- type: similarity_f1_threshold
value: 80.5047
- type: similarity_precision
value: 77.1539
- type: similarity_recall
value: 83.3616
- type: similarity_ap
value: 87.6917
- type: cosine_accuracy
value: 89.93090000000001
- type: cosine_accuracy_threshold
value: 82.1349
- type: cosine_f1
value: 80.1377
- type: cosine_f1_threshold
value: 80.5047
- type: cosine_precision
value: 77.1539
- type: cosine_recall
value: 83.3616
- type: cosine_ap
value: 87.6917
- type: manhattan_accuracy
value: 89.86880000000001
- type: manhattan_accuracy_threshold
value: 4882.7347
- type: manhattan_f1
value: 80.2323
- type: manhattan_f1_threshold
value: 5185.1944
- type: manhattan_precision
value: 76.74889999999999
- type: manhattan_recall
value: 84.0468
- type: manhattan_ap
value: 87.70750000000001
- type: euclidean_accuracy
value: 89.94640000000001
- type: euclidean_accuracy_threshold
value: 59.9149
- type: euclidean_f1
value: 80.1527
- type: euclidean_f1_threshold
value: 62.3611
- type: euclidean_precision
value: 77.2744
- type: euclidean_recall
value: 83.2538
- type: euclidean_ap
value: 87.6922
- type: dot_accuracy
value: 89.9038
- type: dot_accuracy_threshold
value: 82.53049999999999
- type: dot_f1
value: 80.0969
- type: dot_f1_threshold
value: 80.7285
- type: dot_precision
value: 77.5853
- type: dot_recall
value: 82.77640000000001
- type: dot_ap
value: 87.668
- type: max_accuracy
value: 89.94640000000001
- type: max_f1
value: 80.2323
- type: max_precision
value: 77.5853
- type: max_recall
value: 84.0468
- type: max_ap
value: 87.70750000000001
- type: main_score
value: 87.70750000000001
---
## Introduction
This model is based on dunzhang/stella_en_1.5B_v5 and google/siglip-so400m-patch14-384.
It can encode both text and images.
**Report:** https://arxiv.org/abs/2412.19048
**Codes:** https://github.com/NLPJCL/RAG-Retrieval
**Data:** https://huggingface.co/datasets/infgrad/jasper_text_distill_dataset
**Training logs:** https://api.wandb.ai/links/dunnzhang0/z8jqoqpb
The core idea of jasper and stella is distillation: **the student model learns to reproduce the teacher model's vectors.**
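As an illustration of that idea only (a minimal sketch under the assumption of a cosine-based objective; the actual training recipe lives in the RAG-Retrieval repository linked above), a single vector-distillation step could look like this:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_vecs: torch.Tensor, teacher_vecs: torch.Tensor) -> torch.Tensor:
    """Pull normalized student embeddings toward frozen teacher embeddings.

    student_vecs: (batch, dim) student outputs, projected to the teacher dimension.
    teacher_vecs: (batch, dim) precomputed teacher embeddings (no gradient).
    """
    s = F.normalize(student_vecs, dim=-1)
    t = F.normalize(teacher_vecs, dim=-1)
    # Cosine term: maximize similarity between matching student/teacher pairs.
    cosine_loss = 1.0 - (s * t).sum(dim=-1).mean()
    # Optional relational term: align the pairwise similarity structure of the batch.
    sim_loss = F.mse_loss(s @ s.T, t @ t.T)
    return cosine_loss + sim_loss
```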
## Usage
```python
import torch
from sentence_transformers import SentenceTransformer

DOC1 = """
Blue light is scattered in all directions by the tiny molecules of air in Earth's atmosphere.
Blue is scattered more than other colors because it travels as shorter, smaller waves. This is why we see a blue sky most of the time.
Closer to the horizon, the sky fades to a lighter blue or white.
"""

DOC2 = """
When choosing colors, you can consider the following factors:
Color theory: Understand how colors work together and how they can evoke different reactions.
Color psychology: Consider how colors affect emotions, behaviors, and responses.
Brand identity: Colors can convey meaning and information about a brand.
Mood: Consider the mood you want to create. For example, brighter colors can feel cheerful, while cooler colors can be calming.
Space: Consider the size of the space and the amount of natural light it receives. Dark colors can make a room feel smaller, while light colors can make it feel larger.
Color wheel: Use the color wheel to identify primary, secondary, and tertiary colors.
Color combinations: Decide how to best complement your preferred color with others.
Color palette: Limit your color palette to a main color and one or two additional colors.
60-30-10 rule: Use a primary color 60% of the time, a secondary color 30% of the time, and an accent color 10% of the time
"""

if __name__ == "__main__":
    # load the model
    use_gpu = False
    model_name = "infgrad/jasper_en_vision_language_v1"
    model = SentenceTransformer(
        model_name,
        trust_remote_code=True,
        device="cpu" if not use_gpu else "cuda",
        model_kwargs={
            "torch_dtype": torch.bfloat16 if use_gpu else torch.float32,
            "attn_implementation": "sdpa"
        },
        # vector_dim must be one of 12288, 1024, 512, 256; 1024 is recommended
        # set is_text_encoder to True if you do not need to encode images
        config_kwargs={"is_text_encoder": False, "vector_dim": 1024},
    )
    # max_seq_length can be reduced from the default of 2048 for faster encoding
    model.max_seq_length = 1024

    # data: queries and documents (documents may mix text and image parts)
    q_list = [
        "Why the sky is blue?",
        "how to choose suitable color",
    ]
    doc_list = [
        DOC1,
        [{"type": "image_path", "content": "./assets/img1.png"}, {"type": "text", "content": "Hope this image helps!"}],
        DOC2,
        [{"type": "image_path", "content": "./assets/img2.png"}],
    ]
    q_vecs = model.encode(q_list, prompt_name="s2p_query")
    doc_vecs = model.encode(doc_list)

    # calculate query-document similarity
    similarities = model.similarity(q_vecs, doc_vecs)
    print(similarities)
    # the output is:
    # tensor([[0.7775, 0.7594, 0.2429, 0.2187],
    #         [0.3226, 0.3054, 0.7421, 0.5484]])
```
## Evaluation on MTEB
Evaluation script: `./scripts/evaluate_en_mteb/run_evaluate_mteb.py`
## License
**This model should not be used for any commercial purpose!**
## Citation
```
@misc{zhang2025jasperstelladistillationsota,
title={Jasper and Stella: distillation of SOTA embedding models},
author={Dun Zhang and Jiacheng Li and Ziyang Zeng and Fulong Wang},
year={2025},
eprint={2412.19048},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2412.19048},
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
IEETA/Multi-Head-CRF | IEETA | null | [
"es",
"dataset:IEETA/SPACCC-Spanish-NER",
"license:mit",
"region:us"
] | 1,715 | 1,715 | 0 | 0 | ---
datasets:
- IEETA/SPACCC-Spanish-NER
language:
- es
license: mit
metrics:
- f1
---
# Model Card for Biomedical Named Entity Recognition in Spanish Clinical Texts
Our model targets biomedical Named Entity Recognition (NER) in Spanish clinical texts, a task that is crucial for automated information extraction in medical research and for improving treatment. It uses a novel multi-head Conditional Random Field (CRF) classifier to tackle the multi-class NER task, overcoming the challenge of overlapping entity mentions. The classes it recognizes are symptoms, procedures, diseases, chemicals, and proteins.
We provide 4 different models, available as branches of this repository.
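As a rough illustration of this architecture (a sketch under the assumption of a shared transformer encoder with one BIO tagging head plus CRF per entity class, built here on the `pytorch-crf` package; it is not the authors' implementation, which is available in the GitHub repository referenced below):
```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pytorch-crf package, used here only for illustration

class MultiHeadCRFTagger(nn.Module):
    """Shared encoder with an independent BIO tagging head + CRF per entity class."""

    def __init__(self, encoder_name: str, entity_classes, num_tags: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One emission layer per class, each predicting B/I/O tags for that class only.
        self.emission_heads = nn.ModuleDict(
            {c: nn.Linear(hidden, num_tags) for c in entity_classes}
        )
        self.crfs = nn.ModuleDict({c: CRF(num_tags, batch_first=True) for c in entity_classes})

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        mask = attention_mask.bool()
        loss, decoded = 0.0, {}
        for c, head in self.emission_heads.items():
            emissions = head(hidden)
            if labels is not None:  # labels[c]: (batch, seq_len) BIO tag ids for class c
                loss = loss - self.crfs[c](emissions, labels[c], mask=mask, reduction="mean")
            decoded[c] = self.crfs[c].decode(emissions, mask=mask)
        return loss, decoded
```
Because each class is decoded by its own head, mentions of different classes (e.g. a CHEMICAL span inside a DISEASE span) can overlap, which a single flat tag set cannot represent.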
## Model Details
### Model Description
- **Developed by:** IEETA
- **Model type:** Multi-Head-CRF, Roberta Base
- **Language(s) (NLP):** Spanish
- **License:** MIT
- **Finetuned from model:** lcampillos/roberta-es-clinical-trials-ner
### Model Sources
- **Repository:** [IEETA Multi-Head-CRF GitHub](https://github.com/ieeta-pt/Multi-Head-CRF)
- **Paper:** Multi-head CRF classifier for biomedical multi-class Named Entity Recognition on Spanish clinical notes [Awaiting Publication]
**Authors:**
- Richard A A Jonker ([ORCID: 0000-0002-3806-6940](https://orcid.org/0000-0002-3806-6940))
- Tiago Almeida ([ORCID: 0000-0002-4258-3350](https://orcid.org/0000-0002-4258-3350))
- Rui Antunes ([ORCID: 0000-0003-3533-8872](https://orcid.org/0000-0003-3533-8872))
- João R Almeida ([ORCID: 0000-0003-0729-2264](https://orcid.org/0000-0003-0729-2264))
- Sérgio Matos ([ORCID: 0000-0003-1941-3983](https://orcid.org/0000-0003-1941-3983))
## Uses
Note that we do not accept any liability for use of the model in a professional or medical setting; the model is intended for academic purposes only. It performs Named Entity Recognition over five classes: SYMPTOM, PROCEDURE, DISEASE, PROTEIN, and CHEMICAL.
## How to Get Started with the Model
Please refer to our GitHub repository for more information on how to train the model and run inference: [IEETA Multi-Head-CRF GitHub](https://github.com/ieeta-pt/Multi-Head-CRF)
## Training Details
### Training Data
The training data can be found on IEETA/SPACCC-Spanish-NER, which is further described on the dataset card.
The dataset used consists of 4 separate datasets:
- [SympTEMIST](https://zenodo.org/records/10635215)
- [MedProcNER](https://zenodo.org/records/8224056)
- [DisTEMIST](https://zenodo.org/records/7614764)
- [PharmaCoNER](https://zenodo.org/records/4270158)
### Speeds, Sizes, Times
The models were trained using an Nvidia Quadro RTX 8000. A model covering the 5 classes took approximately 1 hour to train and occupies around 1 GB of disk space. The cost scales linearly with the number of entity classes (roughly +8 minutes per additional class).
### Testing Data, Factors & Metrics
#### Testing Data
The testing data can be found on IEETA/SPACCC-Spanish-NER, which is further described on the dataset card.
#### Metrics
The models were evaluated using the micro-averaged F1-score metric, the standard for entity recognition tasks.
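For reference, micro-averaged F1 pools true positives, false positives, and false negatives across all entity classes before computing precision and recall. A minimal illustrative span-level computation (a simplification of the idea, not the official shared-task scorer) could look like this:
```python
def micro_f1(gold_spans, pred_spans):
    """gold_spans / pred_spans: sets of (doc_id, start, end, entity_class) tuples."""
    tp = len(gold_spans & pred_spans)   # exact-span, exact-class matches
    fp = len(pred_spans - gold_spans)   # predicted entities not in the gold standard
    fn = len(gold_spans - pred_spans)   # gold entities that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```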
### Results
We provide 4 separate models with various hyperparameter changes:
| Hidden layers (HLs) per head | Augmentation | Percentage Tags | Augmentation Probability | F1 |
|--------------|--------------|-----------------|--------------------------|--------|
| 3 | Random | 0.25 | 0.50 | 78.73 |
| 3 | Unknown | 0.50 | 0.25 | 78.50 |
| 3 | None | - | - | **78.89** |
| 1 | Random | 0.25 | 0.50 | **78.89** |
All models are trained with a context size of 32 tokens for 60 epochs.
## Citation
**BibTeX:**
[Awaiting Publication]
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"DISTEMIST",
"PHARMACONER",
"SYMPTEMIST"
] | BioNLP |
vprelovac/universal-sentence-encoder-large-5 | vprelovac | null | [
"mteb",
"model-index",
"region:us"
] | 1,682 | 1,682 | 0 | 1 | ---
tags:
- mteb
model-index:
- name: universal-sentence-encoder-large-5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.19402985074628
- type: ap
value: 39.249966888759666
- type: f1
value: 70.17510532980124
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 69.6285
- type: ap
value: 63.97317997322299
- type: f1
value: 69.48624121982243
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 35.534
- type: f1
value: 34.974303844745194
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 34.718110225806626
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 25.267234486849127
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 69.65040443392367
- type: cos_sim_spearman
value: 69.35579718635816
- type: euclidean_pearson
value: 68.74078260783044
- type: euclidean_spearman
value: 69.35579718635816
- type: manhattan_pearson
value: 68.97023207188357
- type: manhattan_spearman
value: 69.2063961917937
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.12987012987013
- type: f1
value: 77.40193921057201
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 28.39184796722482
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 20.5151608432177
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.48
- type: f1
value: 41.2632839288363
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 64.0552
- type: ap
value: 59.25851636836455
- type: f1
value: 63.90501571634165
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.94117647058823
- type: f1
value: 92.7110107115347
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.43456452348381
- type: f1
value: 52.53178214778298
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.68796234028245
- type: f1
value: 68.47828954699564
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.20242098184264
- type: f1
value: 76.27977367157321
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.266855488757034
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 24.580327378539057
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.928616405043684
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 58.94536303256525
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.43899708996477
- type: cos_sim_spearman
value: 76.84011555220044
- type: euclidean_pearson
value: 79.6116260676631
- type: euclidean_spearman
value: 76.84012073472658
- type: manhattan_pearson
value: 78.49980966442152
- type: manhattan_spearman
value: 75.49233078465171
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 79.8291506264289
- type: cos_sim_spearman
value: 72.49093632759003
- type: euclidean_pearson
value: 75.42130137819414
- type: euclidean_spearman
value: 72.49048089395136
- type: manhattan_pearson
value: 74.17957476459091
- type: manhattan_spearman
value: 71.6143674273714
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 70.91903439531401
- type: cos_sim_spearman
value: 73.65106317244273
- type: euclidean_pearson
value: 73.22383725261588
- type: euclidean_spearman
value: 73.65106317244273
- type: manhattan_pearson
value: 72.98314057093636
- type: manhattan_spearman
value: 73.52101907069579
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 75.19632733755482
- type: cos_sim_spearman
value: 71.88328402076041
- type: euclidean_pearson
value: 74.02395011081532
- type: euclidean_spearman
value: 71.88328903479953
- type: manhattan_pearson
value: 73.52941749980135
- type: manhattan_spearman
value: 71.32905921324534
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 82.42736501667461
- type: cos_sim_spearman
value: 82.89997148218205
- type: euclidean_pearson
value: 82.3189209945513
- type: euclidean_spearman
value: 82.89997089267106
- type: manhattan_pearson
value: 81.78597437071429
- type: manhattan_spearman
value: 82.21582873302081
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 78.44968010602165
- type: cos_sim_spearman
value: 79.82626284236876
- type: euclidean_pearson
value: 79.4157474030238
- type: euclidean_spearman
value: 79.82626269881543
- type: manhattan_pearson
value: 79.13275737559012
- type: manhattan_spearman
value: 79.4847570398719
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51882547098218
- type: cos_sim_spearman
value: 85.19309361840223
- type: euclidean_pearson
value: 84.78417242196153
- type: euclidean_spearman
value: 85.19307726106497
- type: manhattan_pearson
value: 84.09108278425708
- type: manhattan_spearman
value: 84.13590986630149
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 44.814384769251085
- type: cos_sim_spearman
value: 48.43949857027059
- type: euclidean_pearson
value: 47.479132435178855
- type: euclidean_spearman
value: 48.43949857027059
- type: manhattan_pearson
value: 47.16203934707649
- type: manhattan_spearman
value: 48.289920897667095
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 81.25646447054616
- type: cos_sim_spearman
value: 79.93231051166357
- type: euclidean_pearson
value: 80.65225742476945
- type: euclidean_spearman
value: 79.93231051166357
- type: manhattan_pearson
value: 79.84341819764376
- type: manhattan_spearman
value: 79.07650150491334
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.5910891089109
- type: cos_sim_ap
value: 84.37184771930944
- type: cos_sim_f1
value: 78.78787878787878
- type: cos_sim_precision
value: 80.99260823653644
- type: cos_sim_recall
value: 76.7
- type: dot_accuracy
value: 99.5910891089109
- type: dot_ap
value: 84.37184771930944
- type: dot_f1
value: 78.78787878787878
- type: dot_precision
value: 80.99260823653644
- type: dot_recall
value: 76.7
- type: euclidean_accuracy
value: 99.5910891089109
- type: euclidean_ap
value: 84.37185436709098
- type: euclidean_f1
value: 78.78787878787878
- type: euclidean_precision
value: 80.99260823653644
- type: euclidean_recall
value: 76.7
- type: manhattan_accuracy
value: 99.6108910891089
- type: manhattan_ap
value: 85.13355467581354
- type: manhattan_f1
value: 80.2788844621514
- type: manhattan_precision
value: 79.96031746031747
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.6108910891089
- type: max_ap
value: 85.13355467581354
- type: max_f1
value: 80.2788844621514
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 60.8469558550317
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.14392913702168
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.566148619704457
- type: cos_sim_spearman
value: 29.01201818902588
- type: dot_pearson
value: 29.566149876183374
- type: dot_spearman
value: 29.014046950422795
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.17420000000001
- type: ap
value: 13.49623412034604
- type: f1
value: 53.7079366494688
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.309564233163556
- type: f1
value: 59.33623172630094
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.42960819361032
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.04500208618943
- type: cos_sim_ap
value: 70.12785509302904
- type: cos_sim_f1
value: 65.36573392243496
- type: cos_sim_precision
value: 61.10601193207894
- type: cos_sim_recall
value: 70.26385224274406
- type: dot_accuracy
value: 85.04500208618943
- type: dot_ap
value: 70.12785837450095
- type: dot_f1
value: 65.36573392243496
- type: dot_precision
value: 61.10601193207894
- type: dot_recall
value: 70.26385224274406
- type: euclidean_accuracy
value: 85.04500208618943
- type: euclidean_ap
value: 70.1278575285826
- type: euclidean_f1
value: 65.36573392243496
- type: euclidean_precision
value: 61.10601193207894
- type: euclidean_recall
value: 70.26385224274406
- type: manhattan_accuracy
value: 85.03308100375514
- type: manhattan_ap
value: 69.67192372362932
- type: manhattan_f1
value: 64.95726495726495
- type: manhattan_precision
value: 61.218771888862946
- type: manhattan_recall
value: 69.1820580474934
- type: max_accuracy
value: 85.04500208618943
- type: max_ap
value: 70.12785837450095
- type: max_f1
value: 65.36573392243496
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.18644002018085
- type: cos_sim_ap
value: 84.09120337117118
- type: cos_sim_f1
value: 76.33478718604302
- type: cos_sim_precision
value: 74.59582598471486
- type: cos_sim_recall
value: 78.15676008623345
- type: dot_accuracy
value: 88.18644002018085
- type: dot_ap
value: 84.09120289232122
- type: dot_f1
value: 76.33478718604302
- type: dot_precision
value: 74.59582598471486
- type: dot_recall
value: 78.15676008623345
- type: euclidean_accuracy
value: 88.18644002018085
- type: euclidean_ap
value: 84.091202102378
- type: euclidean_f1
value: 76.33478718604302
- type: euclidean_precision
value: 74.59582598471486
- type: euclidean_recall
value: 78.15676008623345
- type: manhattan_accuracy
value: 88.19032095315714
- type: manhattan_ap
value: 84.0865561436236
- type: manhattan_f1
value: 76.16665422235496
- type: manhattan_precision
value: 73.93100449340484
- type: manhattan_recall
value: 78.54173082845703
- type: max_accuracy
value: 88.19032095315714
- type: max_ap
value: 84.09120337117118
- type: max_f1
value: 76.33478718604302
---
This model is part of the [MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard) evaluation.
```python
# !pip install tensorflow_text
import tensorflow_hub as hub
from tensorflow_text import SentencepieceTokenizer
import tensorflow as tf
embedder=hub.load("https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3")
class USE():
    def encode(self, sentences, batch_size=32, **kwargs):
        # Embed the sentences in batches and return a flat list of embedding vectors.
        embeddings = []
        for i in range(0, len(sentences), batch_size):
            batch_sentences = sentences[i:i + batch_size]
            batch_embeddings = embedder(batch_sentences)
            embeddings.extend(batch_embeddings)
        return embeddings
model = USE()
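# Illustrative usage (an assumption added for clarity, not part of the original snippet):
# the encode() interface above is the one the MTEB harness expects, so the wrapper
# could be evaluated roughly like
#   from mteb import MTEB
#   MTEB(tasks=["Banking77Classification"]).run(model, output_folder="results")
sentences = ["This is a test sentence.", "Another sentence to embed."]
vectors = model.encode(sentences)
print(len(vectors), len(vectors[0]))  # number of sentences and the embedding dimensionality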
``` | [
"SUMMARIZATION"
] | [
"BIOSSES"
] | Non_BioNLP |
BookingCare/multilingual-e5-base-similarity-v1-onnx-quantized | BookingCare | sentence-similarity | [
"sentence-transformers",
"onnx",
"xlm-roberta",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2402.05672",
"arxiv:2108.08787",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,722 | 1,733 | 18 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: multilingual-e5-base
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.97014925373135
- type: ap
value: 43.69351129103008
- type: f1
value: 73.38075030070492
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7237687366167
- type: ap
value: 82.22089859962671
- type: f1
value: 69.95532758884401
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.65517241379312
- type: ap
value: 28.507918657094738
- type: f1
value: 66.84516013726119
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.32976445396146
- type: ap
value: 20.720481637566014
- type: f1
value: 59.78002763416003
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.63775
- type: ap
value: 87.22277903861716
- type: f1
value: 90.60378636386807
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.546
- type: f1
value: 44.05666638370923
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.828
- type: f1
value: 41.2710255644252
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.534
- type: f1
value: 39.820743174270326
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.684
- type: f1
value: 39.11052682815307
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.436
- type: f1
value: 37.07082931930871
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.226000000000006
- type: f1
value: 36.65372077739185
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.699
- type: map_at_1000
value: 37.724000000000004
- type: map_at_3
value: 32.207
- type: map_at_5
value: 34.312
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 36.574
- type: mrr_at_100
value: 37.854
- type: mrr_at_1000
value: 37.878
- type: mrr_at_3
value: 32.385000000000005
- type: mrr_at_5
value: 34.48
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 44.230000000000004
- type: ndcg_at_100
value: 49.974000000000004
- type: ndcg_at_1000
value: 50.522999999999996
- type: ndcg_at_3
value: 35.363
- type: ndcg_at_5
value: 39.164
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 6.935
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.841
- type: precision_at_5
value: 10.754
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 95.235
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 44.523
- type: recall_at_5
value: 53.769999999999996
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.27789869854063
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.41979463347428
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.22752045109304
- type: mrr
value: 71.51112430198303
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.71147646622866
- type: cos_sim_spearman
value: 85.059167046486
- type: euclidean_pearson
value: 75.88421613600647
- type: euclidean_spearman
value: 75.12821787150585
- type: manhattan_pearson
value: 75.22005646957604
- type: manhattan_spearman
value: 74.42880434453272
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.23799582463465
- type: f1
value: 99.12665274878218
- type: precision
value: 99.07098121085595
- type: recall
value: 99.23799582463465
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.88685890380806
- type: f1
value: 97.59336708489249
- type: precision
value: 97.44662117543473
- type: recall
value: 97.88685890380806
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.47142362313821
- type: f1
value: 97.1989377670015
- type: precision
value: 97.06384944001847
- type: recall
value: 97.47142362313821
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.4728804634018
- type: f1
value: 98.2973494821836
- type: precision
value: 98.2095839915745
- type: recall
value: 98.4728804634018
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.74025974025975
- type: f1
value: 82.67420447730439
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.0380848063507
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.45956405670166
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.122
- type: map_at_10
value: 42.03
- type: map_at_100
value: 43.364000000000004
- type: map_at_1000
value: 43.474000000000004
- type: map_at_3
value: 38.804
- type: map_at_5
value: 40.585
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 48.227
- type: mrr_at_100
value: 49.018
- type: mrr_at_1000
value: 49.064
- type: mrr_at_3
value: 45.994
- type: mrr_at_5
value: 47.396
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 47.825
- type: ndcg_at_100
value: 52.852
- type: ndcg_at_1000
value: 54.891
- type: ndcg_at_3
value: 43.517
- type: ndcg_at_5
value: 45.493
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 8.956
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 32.122
- type: recall_at_10
value: 58.294999999999995
- type: recall_at_100
value: 79.726
- type: recall_at_1000
value: 93.099
- type: recall_at_3
value: 45.017
- type: recall_at_5
value: 51.002
- type: map_at_1
value: 29.677999999999997
- type: map_at_10
value: 38.684000000000005
- type: map_at_100
value: 39.812999999999995
- type: map_at_1000
value: 39.945
- type: map_at_3
value: 35.831
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 37.771
- type: mrr_at_10
value: 44.936
- type: mrr_at_100
value: 45.583
- type: mrr_at_1000
value: 45.634
- type: mrr_at_3
value: 42.771
- type: mrr_at_5
value: 43.994
- type: ndcg_at_1
value: 37.771
- type: ndcg_at_10
value: 44.059
- type: ndcg_at_100
value: 48.192
- type: ndcg_at_1000
value: 50.375
- type: ndcg_at_3
value: 40.172000000000004
- type: ndcg_at_5
value: 41.899
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 8.286999999999999
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.406000000000002
- type: precision_at_5
value: 13.745
- type: recall_at_1
value: 29.677999999999997
- type: recall_at_10
value: 53.071
- type: recall_at_100
value: 70.812
- type: recall_at_1000
value: 84.841
- type: recall_at_3
value: 41.016000000000005
- type: recall_at_5
value: 46.22
- type: map_at_1
value: 42.675000000000004
- type: map_at_10
value: 53.93599999999999
- type: map_at_100
value: 54.806999999999995
- type: map_at_1000
value: 54.867
- type: map_at_3
value: 50.934000000000005
- type: map_at_5
value: 52.583
- type: mrr_at_1
value: 48.339
- type: mrr_at_10
value: 57.265
- type: mrr_at_100
value: 57.873
- type: mrr_at_1000
value: 57.906
- type: mrr_at_3
value: 55.193000000000005
- type: mrr_at_5
value: 56.303000000000004
- type: ndcg_at_1
value: 48.339
- type: ndcg_at_10
value: 59.19799999999999
- type: ndcg_at_100
value: 62.743
- type: ndcg_at_1000
value: 63.99399999999999
- type: ndcg_at_3
value: 54.367
- type: ndcg_at_5
value: 56.548
- type: precision_at_1
value: 48.339
- type: precision_at_10
value: 9.216000000000001
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.72
- type: precision_at_5
value: 16.025
- type: recall_at_1
value: 42.675000000000004
- type: recall_at_10
value: 71.437
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.581
- type: recall_at_3
value: 58.434
- type: recall_at_5
value: 63.754
- type: map_at_1
value: 23.518
- type: map_at_10
value: 30.648999999999997
- type: map_at_100
value: 31.508999999999997
- type: map_at_1000
value: 31.604
- type: map_at_3
value: 28.247
- type: map_at_5
value: 29.65
- type: mrr_at_1
value: 25.650000000000002
- type: mrr_at_10
value: 32.771
- type: mrr_at_100
value: 33.554
- type: mrr_at_1000
value: 33.629999999999995
- type: mrr_at_3
value: 30.433
- type: mrr_at_5
value: 31.812
- type: ndcg_at_1
value: 25.650000000000002
- type: ndcg_at_10
value: 34.929
- type: ndcg_at_100
value: 39.382
- type: ndcg_at_1000
value: 41.913
- type: ndcg_at_3
value: 30.292
- type: ndcg_at_5
value: 32.629999999999995
- type: precision_at_1
value: 25.650000000000002
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.58
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 23.518
- type: recall_at_10
value: 46.19
- type: recall_at_100
value: 67.123
- type: recall_at_1000
value: 86.442
- type: recall_at_3
value: 33.678000000000004
- type: recall_at_5
value: 39.244
- type: map_at_1
value: 15.891
- type: map_at_10
value: 22.464000000000002
- type: map_at_100
value: 23.483
- type: map_at_1000
value: 23.613
- type: map_at_3
value: 20.080000000000002
- type: map_at_5
value: 21.526
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 26.712999999999997
- type: mrr_at_100
value: 27.650000000000002
- type: mrr_at_1000
value: 27.737000000000002
- type: mrr_at_3
value: 24.274
- type: mrr_at_5
value: 25.711000000000002
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 27.028999999999996
- type: ndcg_at_100
value: 32.064
- type: ndcg_at_1000
value: 35.188
- type: ndcg_at_3
value: 22.512999999999998
- type: ndcg_at_5
value: 24.89
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.811
- type: recall_at_1
value: 15.891
- type: recall_at_10
value: 37.261
- type: recall_at_100
value: 59.12
- type: recall_at_1000
value: 81.356
- type: recall_at_3
value: 24.741
- type: recall_at_5
value: 30.753999999999998
- type: map_at_1
value: 27.544
- type: map_at_10
value: 36.283
- type: map_at_100
value: 37.467
- type: map_at_1000
value: 37.574000000000005
- type: map_at_3
value: 33.528999999999996
- type: map_at_5
value: 35.028999999999996
- type: mrr_at_1
value: 34.166999999999994
- type: mrr_at_10
value: 41.866
- type: mrr_at_100
value: 42.666
- type: mrr_at_1000
value: 42.716
- type: mrr_at_3
value: 39.541
- type: mrr_at_5
value: 40.768
- type: ndcg_at_1
value: 34.166999999999994
- type: ndcg_at_10
value: 41.577
- type: ndcg_at_100
value: 46.687
- type: ndcg_at_1000
value: 48.967
- type: ndcg_at_3
value: 37.177
- type: ndcg_at_5
value: 39.097
- type: precision_at_1
value: 34.166999999999994
- type: precision_at_10
value: 7.420999999999999
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 17.291999999999998
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 27.544
- type: recall_at_10
value: 51.99399999999999
- type: recall_at_100
value: 73.738
- type: recall_at_1000
value: 89.33
- type: recall_at_3
value: 39.179
- type: recall_at_5
value: 44.385999999999996
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.475
- type: map_at_100
value: 36.626999999999995
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 32.818000000000005
- type: map_at_5
value: 34.397
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.784
- type: mrr_at_100
value: 41.602
- type: mrr_at_1000
value: 41.661
- type: mrr_at_3
value: 38.68
- type: mrr_at_5
value: 39.838
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 40.697
- type: ndcg_at_100
value: 45.799
- type: ndcg_at_1000
value: 48.235
- type: ndcg_at_3
value: 36.516
- type: ndcg_at_5
value: 38.515
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.202999999999999
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.145999999999999
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 50.995000000000005
- type: recall_at_100
value: 73.065
- type: recall_at_1000
value: 89.781
- type: recall_at_3
value: 39.073
- type: recall_at_5
value: 44.395
- type: map_at_1
value: 25.946583333333333
- type: map_at_10
value: 33.79725
- type: map_at_100
value: 34.86408333333333
- type: map_at_1000
value: 34.9795
- type: map_at_3
value: 31.259999999999998
- type: map_at_5
value: 32.71541666666666
- type: mrr_at_1
value: 30.863749999999996
- type: mrr_at_10
value: 37.99183333333333
- type: mrr_at_100
value: 38.790499999999994
- type: mrr_at_1000
value: 38.85575000000001
- type: mrr_at_3
value: 35.82083333333333
- type: mrr_at_5
value: 37.07533333333333
- type: ndcg_at_1
value: 30.863749999999996
- type: ndcg_at_10
value: 38.52141666666667
- type: ndcg_at_100
value: 43.17966666666667
- type: ndcg_at_1000
value: 45.64608333333333
- type: ndcg_at_3
value: 34.333000000000006
- type: ndcg_at_5
value: 36.34975
- type: precision_at_1
value: 30.863749999999996
- type: precision_at_10
value: 6.598999999999999
- type: precision_at_100
value: 1.0502500000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 15.557583333333334
- type: precision_at_5
value: 11.020000000000001
- type: recall_at_1
value: 25.946583333333333
- type: recall_at_10
value: 48.36991666666666
- type: recall_at_100
value: 69.02408333333334
- type: recall_at_1000
value: 86.43858333333331
- type: recall_at_3
value: 36.4965
- type: recall_at_5
value: 41.76258333333334
- type: map_at_1
value: 22.431
- type: map_at_10
value: 28.889
- type: map_at_100
value: 29.642000000000003
- type: map_at_1000
value: 29.742
- type: map_at_3
value: 26.998
- type: map_at_5
value: 28.172000000000004
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 31.763
- type: mrr_at_100
value: 32.443
- type: mrr_at_1000
value: 32.531
- type: mrr_at_3
value: 29.959000000000003
- type: mrr_at_5
value: 31.063000000000002
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 36.5
- type: ndcg_at_1000
value: 39.133
- type: ndcg_at_3
value: 29.25
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 22.431
- type: recall_at_10
value: 41.134
- type: recall_at_100
value: 59.28600000000001
- type: recall_at_1000
value: 78.857
- type: recall_at_3
value: 31.926
- type: recall_at_5
value: 36.335
- type: map_at_1
value: 17.586
- type: map_at_10
value: 23.304
- type: map_at_100
value: 24.159
- type: map_at_1000
value: 24.281
- type: map_at_3
value: 21.316
- type: map_at_5
value: 22.383
- type: mrr_at_1
value: 21.645
- type: mrr_at_10
value: 27.365000000000002
- type: mrr_at_100
value: 28.108
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 25.482
- type: mrr_at_5
value: 26.479999999999997
- type: ndcg_at_1
value: 21.645
- type: ndcg_at_10
value: 27.306
- type: ndcg_at_100
value: 31.496000000000002
- type: ndcg_at_1000
value: 34.53
- type: ndcg_at_3
value: 23.73
- type: ndcg_at_5
value: 25.294
- type: precision_at_1
value: 21.645
- type: precision_at_10
value: 4.797
- type: precision_at_100
value: 0.8059999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.850999999999999
- type: precision_at_5
value: 7.736
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 35.481
- type: recall_at_100
value: 54.534000000000006
- type: recall_at_1000
value: 76.456
- type: recall_at_3
value: 25.335
- type: recall_at_5
value: 29.473
- type: map_at_1
value: 25.095
- type: map_at_10
value: 32.374
- type: map_at_100
value: 33.537
- type: map_at_1000
value: 33.634
- type: map_at_3
value: 30.089
- type: map_at_5
value: 31.433
- type: mrr_at_1
value: 29.198
- type: mrr_at_10
value: 36.01
- type: mrr_at_100
value: 37.022
- type: mrr_at_1000
value: 37.083
- type: mrr_at_3
value: 33.94
- type: mrr_at_5
value: 35.148
- type: ndcg_at_1
value: 29.198
- type: ndcg_at_10
value: 36.729
- type: ndcg_at_100
value: 42.114000000000004
- type: ndcg_at_1000
value: 44.592
- type: ndcg_at_3
value: 32.644
- type: ndcg_at_5
value: 34.652
- type: precision_at_1
value: 29.198
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 10.093
- type: recall_at_1
value: 25.095
- type: recall_at_10
value: 46.392
- type: recall_at_100
value: 69.706
- type: recall_at_1000
value: 87.738
- type: recall_at_3
value: 35.303000000000004
- type: recall_at_5
value: 40.441
- type: map_at_1
value: 26.857999999999997
- type: map_at_10
value: 34.066
- type: map_at_100
value: 35.671
- type: map_at_1000
value: 35.881
- type: map_at_3
value: 31.304
- type: map_at_5
value: 32.885
- type: mrr_at_1
value: 32.411
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.894
- type: mrr_at_1000
value: 39.959
- type: mrr_at_3
value: 36.626999999999995
- type: mrr_at_5
value: 38.011
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_10
value: 39.208
- type: ndcg_at_100
value: 44.626
- type: ndcg_at_1000
value: 47.43
- type: ndcg_at_3
value: 35.091
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 32.411
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 26.857999999999997
- type: recall_at_10
value: 47.407
- type: recall_at_100
value: 72.236
- type: recall_at_1000
value: 90.77
- type: recall_at_3
value: 35.125
- type: recall_at_5
value: 40.522999999999996
- type: map_at_1
value: 21.3
- type: map_at_10
value: 27.412999999999997
- type: map_at_100
value: 28.29
- type: map_at_1000
value: 28.398
- type: map_at_3
value: 25.169999999999998
- type: map_at_5
value: 26.496
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 29.215000000000003
- type: mrr_at_100
value: 30.073
- type: mrr_at_1000
value: 30.156
- type: mrr_at_3
value: 26.956000000000003
- type: mrr_at_5
value: 28.38
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 31.113000000000003
- type: ndcg_at_100
value: 35.701
- type: ndcg_at_1000
value: 38.505
- type: ndcg_at_3
value: 26.727
- type: ndcg_at_5
value: 29.037000000000003
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 4.787
- type: precision_at_100
value: 0.763
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 11.091
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 21.3
- type: recall_at_10
value: 40.782000000000004
- type: recall_at_100
value: 62.13999999999999
- type: recall_at_1000
value: 83.012
- type: recall_at_3
value: 29.131
- type: recall_at_5
value: 34.624
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.631
- type: map_at_10
value: 16.634999999999998
- type: map_at_100
value: 18.23
- type: map_at_1000
value: 18.419
- type: map_at_3
value: 13.66
- type: map_at_5
value: 15.173
- type: mrr_at_1
value: 21.368000000000002
- type: mrr_at_10
value: 31.56
- type: mrr_at_100
value: 32.58
- type: mrr_at_1000
value: 32.633
- type: mrr_at_3
value: 28.241
- type: mrr_at_5
value: 30.225
- type: ndcg_at_1
value: 21.368000000000002
- type: ndcg_at_10
value: 23.855999999999998
- type: ndcg_at_100
value: 30.686999999999998
- type: ndcg_at_1000
value: 34.327000000000005
- type: ndcg_at_3
value: 18.781
- type: ndcg_at_5
value: 20.73
- type: precision_at_1
value: 21.368000000000002
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.217
- type: precision_at_3
value: 13.876
- type: precision_at_5
value: 11.062
- type: recall_at_1
value: 9.631
- type: recall_at_10
value: 29.517
- type: recall_at_100
value: 53.452
- type: recall_at_1000
value: 74.115
- type: recall_at_3
value: 17.605999999999998
- type: recall_at_5
value: 22.505
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.885
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 26.316
- type: map_at_1000
value: 27.869
- type: map_at_3
value: 13.719000000000001
- type: map_at_5
value: 15.716
- type: mrr_at_1
value: 66
- type: mrr_at_10
value: 74.263
- type: mrr_at_100
value: 74.519
- type: mrr_at_1000
value: 74.531
- type: mrr_at_3
value: 72.458
- type: mrr_at_5
value: 73.321
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.355999999999995
- type: ndcg_at_100
value: 44.366
- type: ndcg_at_1000
value: 51.771
- type: ndcg_at_3
value: 45.195
- type: ndcg_at_5
value: 42.187000000000005
- type: precision_at_1
value: 66
- type: precision_at_10
value: 31.75
- type: precision_at_100
value: 10.11
- type: precision_at_1000
value: 1.9800000000000002
- type: precision_at_3
value: 48.167
- type: precision_at_5
value: 40.050000000000004
- type: recall_at_1
value: 8.885
- type: recall_at_10
value: 24.471999999999998
- type: recall_at_100
value: 49.669000000000004
- type: recall_at_1000
value: 73.383
- type: recall_at_3
value: 14.872
- type: recall_at_5
value: 18.262999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.18
- type: f1
value: 40.26878691789978
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.751999999999995
- type: map_at_10
value: 74.131
- type: map_at_100
value: 74.407
- type: map_at_1000
value: 74.423
- type: map_at_3
value: 72.329
- type: map_at_5
value: 73.555
- type: mrr_at_1
value: 67.282
- type: mrr_at_10
value: 78.292
- type: mrr_at_100
value: 78.455
- type: mrr_at_1000
value: 78.458
- type: mrr_at_3
value: 76.755
- type: mrr_at_5
value: 77.839
- type: ndcg_at_1
value: 67.282
- type: ndcg_at_10
value: 79.443
- type: ndcg_at_100
value: 80.529
- type: ndcg_at_1000
value: 80.812
- type: ndcg_at_3
value: 76.281
- type: ndcg_at_5
value: 78.235
- type: precision_at_1
value: 67.282
- type: precision_at_10
value: 10.078
- type: precision_at_100
value: 1.082
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.178
- type: precision_at_5
value: 19.232
- type: recall_at_1
value: 62.751999999999995
- type: recall_at_10
value: 91.521
- type: recall_at_100
value: 95.997
- type: recall_at_1000
value: 97.775
- type: recall_at_3
value: 83.131
- type: recall_at_5
value: 87.93299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.861
- type: map_at_10
value: 30.252000000000002
- type: map_at_100
value: 32.082
- type: map_at_1000
value: 32.261
- type: map_at_3
value: 25.909
- type: map_at_5
value: 28.296
- type: mrr_at_1
value: 37.346000000000004
- type: mrr_at_10
value: 45.802
- type: mrr_at_100
value: 46.611999999999995
- type: mrr_at_1000
value: 46.659
- type: mrr_at_3
value: 43.056
- type: mrr_at_5
value: 44.637
- type: ndcg_at_1
value: 37.346000000000004
- type: ndcg_at_10
value: 38.169
- type: ndcg_at_100
value: 44.864
- type: ndcg_at_1000
value: 47.974
- type: ndcg_at_3
value: 33.619
- type: ndcg_at_5
value: 35.317
- type: precision_at_1
value: 37.346000000000004
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.775
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.325
- type: precision_at_5
value: 16.852
- type: recall_at_1
value: 18.861
- type: recall_at_10
value: 45.672000000000004
- type: recall_at_100
value: 70.60499999999999
- type: recall_at_1000
value: 89.216
- type: recall_at_3
value: 30.361
- type: recall_at_5
value: 36.998999999999995
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.852999999999994
- type: map_at_10
value: 59.961
- type: map_at_100
value: 60.78
- type: map_at_1000
value: 60.843
- type: map_at_3
value: 56.39999999999999
- type: map_at_5
value: 58.646
- type: mrr_at_1
value: 75.70599999999999
- type: mrr_at_10
value: 82.321
- type: mrr_at_100
value: 82.516
- type: mrr_at_1000
value: 82.525
- type: mrr_at_3
value: 81.317
- type: mrr_at_5
value: 81.922
- type: ndcg_at_1
value: 75.70599999999999
- type: ndcg_at_10
value: 68.557
- type: ndcg_at_100
value: 71.485
- type: ndcg_at_1000
value: 72.71600000000001
- type: ndcg_at_3
value: 63.524
- type: ndcg_at_5
value: 66.338
- type: precision_at_1
value: 75.70599999999999
- type: precision_at_10
value: 14.463000000000001
- type: precision_at_100
value: 1.677
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 40.806
- type: precision_at_5
value: 26.709
- type: recall_at_1
value: 37.852999999999994
- type: recall_at_10
value: 72.316
- type: recall_at_100
value: 83.842
- type: recall_at_1000
value: 91.999
- type: recall_at_3
value: 61.209
- type: recall_at_5
value: 66.77199999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.46039999999999
- type: ap
value: 79.9812521351881
- type: f1
value: 85.31722909702084
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.704
- type: map_at_10
value: 35.329
- type: map_at_100
value: 36.494
- type: map_at_1000
value: 36.541000000000004
- type: map_at_3
value: 31.476
- type: map_at_5
value: 33.731
- type: mrr_at_1
value: 23.294999999999998
- type: mrr_at_10
value: 35.859
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.008
- type: mrr_at_3
value: 32.085
- type: mrr_at_5
value: 34.299
- type: ndcg_at_1
value: 23.324
- type: ndcg_at_10
value: 42.274
- type: ndcg_at_100
value: 47.839999999999996
- type: ndcg_at_1000
value: 48.971
- type: ndcg_at_3
value: 34.454
- type: ndcg_at_5
value: 38.464
- type: precision_at_1
value: 23.324
- type: precision_at_10
value: 6.648
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.850999999999999
- type: recall_at_1
value: 22.704
- type: recall_at_10
value: 63.660000000000004
- type: recall_at_100
value: 89.29899999999999
- type: recall_at_1000
value: 97.88900000000001
- type: recall_at_3
value: 42.441
- type: recall_at_5
value: 52.04
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.1326949384405
- type: f1
value: 92.89743579612082
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.62524654832347
- type: f1
value: 88.65106082263151
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.59039359573046
- type: f1
value: 90.31532892105662
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.21046038208581
- type: f1
value: 86.41459529813113
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.3180351380423
- type: f1
value: 86.71383078226444
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.24231464737792
- type: f1
value: 86.31845567592403
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945736
- type: f1
value: 57.52079940417103
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.2341504649197
- type: f1
value: 51.349951558039244
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.27418278852569
- type: f1
value: 50.1714985749095
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.68243031631694
- type: f1
value: 50.1066160836192
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.2362854069559
- type: f1
value: 48.821279948766424
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.71428571428571
- type: f1
value: 53.94611389496195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.97646267652992
- type: f1
value: 57.26797883561521
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.65501008742435
- type: f1
value: 50.416258382177034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.45796906523201
- type: f1
value: 53.306690547422185
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.59246805648957
- type: f1
value: 59.818381969051494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.126429051782104
- type: f1
value: 58.25993593933026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.057162071284466
- type: f1
value: 46.96095728790911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.64425016812375
- type: f1
value: 62.858291698755764
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 62.44639030604241
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.68056489576328
- type: f1
value: 61.775326758789504
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.11163416274377
- type: f1
value: 69.70789096927015
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.40282447881641
- type: f1
value: 66.38492065671895
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.24613315400134
- type: f1
value: 64.3348019501336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.78345662407531
- type: f1
value: 62.21279452354622
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.9455279085407
- type: f1
value: 65.48193124964094
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.05110961667788
- type: f1
value: 58.097856564684534
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.95292535305985
- type: f1
value: 62.09182174767901
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.97310020174848
- type: f1
value: 61.14252567730396
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 57.044041742492034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.63752521856085
- type: f1
value: 63.889340907205316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.385339609952936
- type: f1
value: 53.449033750088304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.93073301950234
- type: f1
value: 65.9884357824104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.94418291862812
- type: f1
value: 66.48740222583132
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.26025554808339
- type: f1
value: 50.19562815100793
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.98789509078682
- type: f1
value: 46.65788438676836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.68728984532616
- type: f1
value: 41.642419349541996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.19300605245461
- type: f1
value: 55.8626492442437
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33826496301278
- type: f1
value: 63.89499791648792
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.33960995292536
- type: f1
value: 57.15242464180892
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.09347679892402
- type: f1
value: 59.64733214063841
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.75924680564896
- type: f1
value: 55.96585692366827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.48486886348352
- type: f1
value: 59.45143559032946
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.56422326832549
- type: f1
value: 54.96368702901926
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.18022864828512
- type: f1
value: 63.05369805040634
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.30329522528581
- type: f1
value: 64.06084612020727
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.36919973100201
- type: f1
value: 65.12154124788887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.98117014122394
- type: f1
value: 66.41847559806962
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 62.17067330740817
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.01815736381977
- type: f1
value: 66.24988369607843
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.34700739744452
- type: f1
value: 59.957933424941636
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.23402824478815
- type: f1
value: 57.98836976018471
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.43849680666855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.998655010087425
- type: f1
value: 52.83737515406804
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.71217215870882
- type: f1
value: 55.051794977833026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.724277067921996
- type: f1
value: 56.33485571838306
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631473
- type: f1
value: 64.96772366193588
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.860793544048406
- type: f1
value: 58.148845819115394
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.40753194351043
- type: f1
value: 63.18903778054698
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.52320107599194
- type: f1
value: 58.356144563398516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.17014122394083
- type: f1
value: 63.919964062638925
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.15601882985878
- type: f1
value: 67.01451905761371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 64.14420425129063
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.08742434431743
- type: f1
value: 63.044060042311756
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.52387357094821
- type: f1
value: 56.82398588814534
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.239408204438476
- type: f1
value: 61.92570286170469
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.74915938130463
- type: f1
value: 62.130740689396276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.00336247478144
- type: f1
value: 63.71080635228055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.837928715534645
- type: f1
value: 50.390741680320836
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.42098184263618
- type: f1
value: 71.41355113538995
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95359784801613
- type: f1
value: 71.42699340156742
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.18157363819772
- type: f1
value: 69.74836113037671
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 76.78000685068261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.5030262273033
- type: f1
value: 71.71620130425673
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.24546065904505
- type: f1
value: 69.07638311730359
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.12911903160726
- type: f1
value: 68.32651736539815
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195025
- type: f1
value: 71.33986549860187
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44451916610626
- type: f1
value: 66.90192664503866
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.16274377942166
- type: f1
value: 68.01090953775066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.75319435104237
- type: f1
value: 70.18035309201403
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.14391392064559
- type: f1
value: 61.48286540778145
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.70275722932078
- type: f1
value: 70.26164779846495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.93813046402153
- type: f1
value: 58.8852862116525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.320107599193
- type: f1
value: 72.19836409602924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.65366509751176
- type: f1
value: 74.55188288799579
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.694014794889036
- type: f1
value: 58.11353311721067
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.37457969065231
- type: f1
value: 52.81306134311697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.3086751849361
- type: f1
value: 45.396449765419376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.151983860121064
- type: f1
value: 60.31762544281696
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.44788164088769
- type: f1
value: 71.68150151736367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.81439139206455
- type: f1
value: 62.06735559105593
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04303967720242
- type: f1
value: 66.68298851670133
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.43913920645595
- type: f1
value: 60.25605977560783
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.90316072629456
- type: f1
value: 65.1325924692381
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.63752521856086
- type: f1
value: 59.14284778039585
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.63080026899797
- type: f1
value: 70.89771864626877
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.10827168796234
- type: f1
value: 71.71954219691159
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.59515803631471
- type: f1
value: 70.05040128099003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.83389374579691
- type: f1
value: 70.84877936562735
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18628110289173
- type: f1
value: 68.97232927921841
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.99260255548083
- type: f1
value: 72.85139492157732
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- type: f1
value: 65.08833655469431
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48621385339611
- type: f1
value: 64.43483199071298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.14391392064559
- type: f1
value: 72.2580822579741
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.88567585743107
- type: f1
value: 58.3073765932569
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.38399462004034
- type: f1
value: 60.82139544252606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 60.71443370385374
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- type: f1
value: 70.99761812049401
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.73705447209146
- type: f1
value: 61.680849331794796
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.66778749159381
- type: f1
value: 71.17320646080115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.640215198386
- type: f1
value: 63.301805157015444
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.00672494956288
- type: f1
value: 70.26005548582106
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.42030934767989
- type: f1
value: 75.2074842882598
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.69266980497646
- type: f1
value: 70.94103167391192
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.91697191169135
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.434000079573313
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.96683513343383
- type: mrr
value: 31.967364078714834
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.5280000000000005
- type: map_at_10
value: 11.793
- type: map_at_100
value: 14.496999999999998
- type: map_at_1000
value: 15.783
- type: map_at_3
value: 8.838
- type: map_at_5
value: 10.07
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.531000000000006
- type: mrr_at_100
value: 52.205
- type: mrr_at_1000
value: 52.242999999999995
- type: mrr_at_3
value: 49.431999999999995
- type: mrr_at_5
value: 50.470000000000006
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 32.464999999999996
- type: ndcg_at_100
value: 28.927999999999997
- type: ndcg_at_1000
value: 37.629000000000005
- type: ndcg_at_3
value: 37.845
- type: ndcg_at_5
value: 35.147
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.932000000000002
- type: precision_at_100
value: 7.17
- type: precision_at_1000
value: 1.967
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 29.907
- type: recall_at_1
value: 5.5280000000000005
- type: recall_at_10
value: 15.568000000000001
- type: recall_at_100
value: 28.54
- type: recall_at_1000
value: 59.864
- type: recall_at_3
value: 9.822000000000001
- type: recall_at_5
value: 11.726
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.041000000000004
- type: map_at_10
value: 52.664
- type: map_at_100
value: 53.477
- type: map_at_1000
value: 53.505
- type: map_at_3
value: 48.510999999999996
- type: map_at_5
value: 51.036
- type: mrr_at_1
value: 41.338
- type: mrr_at_10
value: 55.071000000000005
- type: mrr_at_100
value: 55.672
- type: mrr_at_1000
value: 55.689
- type: mrr_at_3
value: 51.82
- type: mrr_at_5
value: 53.852
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 60.01800000000001
- type: ndcg_at_100
value: 63.409000000000006
- type: ndcg_at_1000
value: 64.017
- type: ndcg_at_3
value: 52.44799999999999
- type: ndcg_at_5
value: 56.571000000000005
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 9.531
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.416
- type: precision_at_5
value: 16.46
- type: recall_at_1
value: 37.041000000000004
- type: recall_at_10
value: 79.76299999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.851
- type: recall_at_3
value: 60.465
- type: recall_at_5
value: 69.906
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 83.758
- type: map_at_100
value: 84.406
- type: map_at_1000
value: 84.425
- type: map_at_3
value: 80.839
- type: map_at_5
value: 82.646
- type: mrr_at_1
value: 80.62
- type: mrr_at_10
value: 86.947
- type: mrr_at_100
value: 87.063
- type: mrr_at_1000
value: 87.064
- type: mrr_at_3
value: 85.96000000000001
- type: mrr_at_5
value: 86.619
- type: ndcg_at_1
value: 80.63
- type: ndcg_at_10
value: 87.64800000000001
- type: ndcg_at_100
value: 88.929
- type: ndcg_at_1000
value: 89.054
- type: ndcg_at_3
value: 84.765
- type: ndcg_at_5
value: 86.291
- type: precision_at_1
value: 80.63
- type: precision_at_10
value: 13.314
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.1
- type: precision_at_5
value: 24.372
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 94.955
- type: recall_at_100
value: 99.38
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 86.60600000000001
- type: recall_at_5
value: 90.997
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.41329517878427
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.171278362748666
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 9.895
- type: map_at_100
value: 11.776
- type: map_at_1000
value: 12.084
- type: map_at_3
value: 7.2669999999999995
- type: map_at_5
value: 8.620999999999999
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 31.112000000000002
- type: mrr_at_100
value: 32.274
- type: mrr_at_1000
value: 32.35
- type: mrr_at_3
value: 28.133000000000003
- type: mrr_at_5
value: 29.892999999999997
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.163999999999998
- type: ndcg_at_100
value: 24.738
- type: ndcg_at_1000
value: 30.316
- type: ndcg_at_3
value: 16.665
- type: ndcg_at_5
value: 14.478
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.963
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.698
- type: recall_at_100
value: 39.838
- type: recall_at_1000
value: 66.893
- type: recall_at_3
value: 9.418
- type: recall_at_5
value: 12.773000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.90453315738294
- type: cos_sim_spearman
value: 78.51197850080254
- type: euclidean_pearson
value: 80.09647123597748
- type: euclidean_spearman
value: 78.63548011514061
- type: manhattan_pearson
value: 80.10645285675231
- type: manhattan_spearman
value: 78.57861806068901
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.2616156846401
- type: cos_sim_spearman
value: 76.69713867850156
- type: euclidean_pearson
value: 77.97948563800394
- type: euclidean_spearman
value: 74.2371211567807
- type: manhattan_pearson
value: 77.69697879669705
- type: manhattan_spearman
value: 73.86529778022278
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.0293269315045
- type: cos_sim_spearman
value: 78.02555120584198
- type: euclidean_pearson
value: 78.25398100379078
- type: euclidean_spearman
value: 78.66963870599464
- type: manhattan_pearson
value: 78.14314682167348
- type: manhattan_spearman
value: 78.57692322969135
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.16989925136942
- type: cos_sim_spearman
value: 76.5996225327091
- type: euclidean_pearson
value: 77.8319003279786
- type: euclidean_spearman
value: 76.42824009468998
- type: manhattan_pearson
value: 77.69118862737736
- type: manhattan_spearman
value: 76.25568104762812
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.42012286935325
- type: cos_sim_spearman
value: 88.15654297884122
- type: euclidean_pearson
value: 87.34082819427852
- type: euclidean_spearman
value: 88.06333589547084
- type: manhattan_pearson
value: 87.25115596784842
- type: manhattan_spearman
value: 87.9559927695203
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.88222044996712
- type: cos_sim_spearman
value: 84.28476589061077
- type: euclidean_pearson
value: 83.17399758058309
- type: euclidean_spearman
value: 83.85497357244542
- type: manhattan_pearson
value: 83.0308397703786
- type: manhattan_spearman
value: 83.71554539935046
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.20682986257339
- type: cos_sim_spearman
value: 79.94567120362092
- type: euclidean_pearson
value: 79.43122480368902
- type: euclidean_spearman
value: 79.94802077264987
- type: manhattan_pearson
value: 79.32653021527081
- type: manhattan_spearman
value: 79.80961146709178
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.46578144394383
- type: cos_sim_spearman
value: 74.52496637472179
- type: euclidean_pearson
value: 72.2903807076809
- type: euclidean_spearman
value: 73.55549359771645
- type: manhattan_pearson
value: 72.09324837709393
- type: manhattan_spearman
value: 73.36743103606581
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.37272335116
- type: cos_sim_spearman
value: 71.26702117766037
- type: euclidean_pearson
value: 67.114829954434
- type: euclidean_spearman
value: 66.37938893947761
- type: manhattan_pearson
value: 66.79688574095246
- type: manhattan_spearman
value: 66.17292828079667
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.61016770129092
- type: cos_sim_spearman
value: 82.08515426632214
- type: euclidean_pearson
value: 80.557340361131
- type: euclidean_spearman
value: 80.37585812266175
- type: manhattan_pearson
value: 80.6782873404285
- type: manhattan_spearman
value: 80.6678073032024
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.00150745350108
- type: cos_sim_spearman
value: 87.83441972211425
- type: euclidean_pearson
value: 87.94826702308792
- type: euclidean_spearman
value: 87.46143974860725
- type: manhattan_pearson
value: 87.97560344306105
- type: manhattan_spearman
value: 87.5267102829796
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 64.76325252267235
- type: cos_sim_spearman
value: 63.32615095463905
- type: euclidean_pearson
value: 64.07920669155716
- type: euclidean_spearman
value: 61.21409893072176
- type: manhattan_pearson
value: 64.26308625680016
- type: manhattan_spearman
value: 61.2438185254079
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.82644463022595
- type: cos_sim_spearman
value: 76.50381269945073
- type: euclidean_pearson
value: 75.1328548315934
- type: euclidean_spearman
value: 75.63761139408453
- type: manhattan_pearson
value: 75.18610101241407
- type: manhattan_spearman
value: 75.30669266354164
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49994164686832
- type: cos_sim_spearman
value: 86.73743986245549
- type: euclidean_pearson
value: 86.8272894387145
- type: euclidean_spearman
value: 85.97608491000507
- type: manhattan_pearson
value: 86.74960140396779
- type: manhattan_spearman
value: 85.79285984190273
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.58172210788469
- type: cos_sim_spearman
value: 80.17516468334607
- type: euclidean_pearson
value: 77.56537843470504
- type: euclidean_spearman
value: 77.57264627395521
- type: manhattan_pearson
value: 78.09703521695943
- type: manhattan_spearman
value: 78.15942760916954
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.7589932931751
- type: cos_sim_spearman
value: 80.15210089028162
- type: euclidean_pearson
value: 77.54135223516057
- type: euclidean_spearman
value: 77.52697996368764
- type: manhattan_pearson
value: 77.65734439572518
- type: manhattan_spearman
value: 77.77702992016121
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.16682365511267
- type: cos_sim_spearman
value: 79.25311267628506
- type: euclidean_pearson
value: 77.54882036762244
- type: euclidean_spearman
value: 77.33212935194827
- type: manhattan_pearson
value: 77.98405516064015
- type: manhattan_spearman
value: 77.85075717865719
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.10473294775917
- type: cos_sim_spearman
value: 61.82780474476838
- type: euclidean_pearson
value: 45.885111672377256
- type: euclidean_spearman
value: 56.88306351932454
- type: manhattan_pearson
value: 46.101218127323186
- type: manhattan_spearman
value: 56.80953694186333
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.781923079584146
- type: cos_sim_spearman
value: 55.95098449691107
- type: euclidean_pearson
value: 25.4571031323205
- type: euclidean_spearman
value: 49.859978118078935
- type: manhattan_pearson
value: 25.624938455041384
- type: manhattan_spearman
value: 49.99546185049401
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.00618133997907
- type: cos_sim_spearman
value: 66.57896677718321
- type: euclidean_pearson
value: 42.60118466388821
- type: euclidean_spearman
value: 62.8210759715209
- type: manhattan_pearson
value: 42.63446860604094
- type: manhattan_spearman
value: 62.73803068925271
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.460759121626943
- type: cos_sim_spearman
value: 34.13459007469131
- type: euclidean_pearson
value: 6.0917739325525195
- type: euclidean_spearman
value: 27.9947262664867
- type: manhattan_pearson
value: 6.16877864169911
- type: manhattan_spearman
value: 28.00664163971514
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.42546621771696
- type: cos_sim_spearman
value: 63.699663168970474
- type: euclidean_pearson
value: 38.12085278789738
- type: euclidean_spearman
value: 58.12329140741536
- type: manhattan_pearson
value: 37.97364549443335
- type: manhattan_spearman
value: 57.81545502318733
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.82241380954213
- type: cos_sim_spearman
value: 57.86569456006391
- type: euclidean_pearson
value: 31.80480070178813
- type: euclidean_spearman
value: 52.484000620130104
- type: manhattan_pearson
value: 31.952708554646097
- type: manhattan_spearman
value: 52.8560972356195
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.00447170498087
- type: cos_sim_spearman
value: 60.664116225735164
- type: euclidean_pearson
value: 33.87382555421702
- type: euclidean_spearman
value: 55.74649067458667
- type: manhattan_pearson
value: 33.99117246759437
- type: manhattan_spearman
value: 55.98749034923899
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.06497233105448
- type: cos_sim_spearman
value: 65.62968801135676
- type: euclidean_pearson
value: 47.482076613243905
- type: euclidean_spearman
value: 62.65137791498299
- type: manhattan_pearson
value: 47.57052626104093
- type: manhattan_spearman
value: 62.436916516613294
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.49397298562575
- type: cos_sim_spearman
value: 74.79604041187868
- type: euclidean_pearson
value: 49.661891561317795
- type: euclidean_spearman
value: 70.31535537621006
- type: manhattan_pearson
value: 49.553715741850006
- type: manhattan_spearman
value: 70.24779344636806
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.640574515348696
- type: cos_sim_spearman
value: 54.927959317689
- type: euclidean_pearson
value: 29.00139666967476
- type: euclidean_spearman
value: 41.86386566971605
- type: manhattan_pearson
value: 29.47411067730344
- type: manhattan_spearman
value: 42.337438424952786
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.14095292259312
- type: cos_sim_spearman
value: 73.99017581234789
- type: euclidean_pearson
value: 46.46304297872084
- type: euclidean_spearman
value: 60.91834114800041
- type: manhattan_pearson
value: 47.07072666338692
- type: manhattan_spearman
value: 61.70415727977926
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.27184653359575
- type: cos_sim_spearman
value: 77.76070252418626
- type: euclidean_pearson
value: 62.30586577544778
- type: euclidean_spearman
value: 75.14246629110978
- type: manhattan_pearson
value: 62.328196884927046
- type: manhattan_spearman
value: 75.1282792981433
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.59448528829957
- type: cos_sim_spearman
value: 70.37277734222123
- type: euclidean_pearson
value: 57.63145565721123
- type: euclidean_spearman
value: 66.10113048304427
- type: manhattan_pearson
value: 57.18897811586808
- type: manhattan_spearman
value: 66.5595511215901
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.37520607720838
- type: cos_sim_spearman
value: 69.92282148997948
- type: euclidean_pearson
value: 40.55768770125291
- type: euclidean_spearman
value: 55.189128944669605
- type: manhattan_pearson
value: 41.03566433468883
- type: manhattan_spearman
value: 55.61251893174558
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.791929533771835
- type: cos_sim_spearman
value: 66.45819707662093
- type: euclidean_pearson
value: 39.03686018511092
- type: euclidean_spearman
value: 56.01282695640428
- type: manhattan_pearson
value: 38.91586623619632
- type: manhattan_spearman
value: 56.69394943612747
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.82224468473866
- type: cos_sim_spearman
value: 59.467307194781164
- type: euclidean_pearson
value: 27.428459190256145
- type: euclidean_spearman
value: 60.83463107397519
- type: manhattan_pearson
value: 27.487391578496638
- type: manhattan_spearman
value: 61.281380460246496
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 16.306666792752644
- type: cos_sim_spearman
value: 39.35486427252405
- type: euclidean_pearson
value: -2.7887154897955435
- type: euclidean_spearman
value: 27.1296051831719
- type: manhattan_pearson
value: -3.202291270581297
- type: manhattan_spearman
value: 26.32895849218158
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.67006803805076
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 46.91884681500483
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 46.88391675325812
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.79555591223837
- type: cos_sim_spearman
value: 85.63658602085185
- type: euclidean_pearson
value: 85.22080894037671
- type: euclidean_spearman
value: 85.54113580167038
- type: manhattan_pearson
value: 85.1639505960118
- type: manhattan_spearman
value: 85.43502665436196
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.73900991689766
- type: mrr
value: 94.81624131133934
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.678000000000004
- type: map_at_10
value: 65.135
- type: map_at_100
value: 65.824
- type: map_at_1000
value: 65.852
- type: map_at_3
value: 62.736000000000004
- type: map_at_5
value: 64.411
- type: mrr_at_1
value: 58.333
- type: mrr_at_10
value: 66.5
- type: mrr_at_100
value: 67.053
- type: mrr_at_1000
value: 67.08
- type: mrr_at_3
value: 64.944
- type: mrr_at_5
value: 65.89399999999999
- type: ndcg_at_1
value: 58.333
- type: ndcg_at_10
value: 69.34700000000001
- type: ndcg_at_100
value: 72.32
- type: ndcg_at_1000
value: 73.014
- type: ndcg_at_3
value: 65.578
- type: ndcg_at_5
value: 67.738
- type: precision_at_1
value: 58.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 55.678000000000004
- type: recall_at_10
value: 80.72200000000001
- type: recall_at_100
value: 93.93299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 75.978
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74653465346535
- type: cos_sim_ap
value: 93.01476369929063
- type: cos_sim_f1
value: 86.93009118541033
- type: cos_sim_precision
value: 88.09034907597535
- type: cos_sim_recall
value: 85.8
- type: dot_accuracy
value: 99.22970297029703
- type: dot_ap
value: 51.58725659485144
- type: dot_f1
value: 53.51351351351352
- type: dot_precision
value: 58.235294117647065
- type: dot_recall
value: 49.5
- type: euclidean_accuracy
value: 99.74356435643564
- type: euclidean_ap
value: 92.40332894384368
- type: euclidean_f1
value: 86.97838109602817
- type: euclidean_precision
value: 87.46208291203236
- type: euclidean_recall
value: 86.5
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 92.01320815721121
- type: manhattan_f1
value: 86.4135864135864
- type: manhattan_precision
value: 86.32734530938124
- type: manhattan_recall
value: 86.5
- type: max_accuracy
value: 99.74653465346535
- type: max_ap
value: 93.01476369929063
- type: max_f1
value: 86.97838109602817
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.2660514302523
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.4637783572547
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.41377758357637
- type: mrr
value: 50.138451213818854
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.887846011166594
- type: cos_sim_spearman
value: 30.10823258355903
- type: dot_pearson
value: 12.888049550236385
- type: dot_spearman
value: 12.827495903098123
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.667
- type: map_at_100
value: 9.15
- type: map_at_1000
value: 22.927
- type: map_at_3
value: 0.573
- type: map_at_5
value: 0.915
- type: mrr_at_1
value: 80
- type: mrr_at_10
value: 87.167
- type: mrr_at_100
value: 87.167
- type: mrr_at_1000
value: 87.167
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 87.167
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 69.757
- type: ndcg_at_100
value: 52.402
- type: ndcg_at_1000
value: 47.737
- type: ndcg_at_3
value: 71.866
- type: ndcg_at_5
value: 72.225
- type: precision_at_1
value: 80
- type: precision_at_10
value: 75
- type: precision_at_100
value: 53.959999999999994
- type: precision_at_1000
value: 21.568
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.9189999999999998
- type: recall_at_100
value: 12.589
- type: recall_at_1000
value: 45.312000000000005
- type: recall_at_3
value: 0.61
- type: recall_at_5
value: 1.019
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 90.06
- type: precision
value: 89.17333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.06936416184971
- type: f1
value: 50.87508028259473
- type: precision
value: 48.97398843930635
- type: recall
value: 56.06936416184971
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.3170731707317
- type: f1
value: 52.96080139372822
- type: precision
value: 51.67861124382864
- type: recall
value: 57.3170731707317
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.67333333333333
- type: precision
value: 91.90833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97.07333333333332
- type: precision
value: 96.79500000000002
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.2
- type: precision
value: 92.48333333333333
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.9
- type: f1
value: 91.26666666666667
- type: precision
value: 90.59444444444445
- type: recall
value: 92.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 34.32835820895522
- type: f1
value: 29.074180380150533
- type: precision
value: 28.068207322920596
- type: recall
value: 34.32835820895522
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.5
- type: f1
value: 74.3945115995116
- type: precision
value: 72.82967843459222
- type: recall
value: 78.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34146341463415
- type: f1
value: 61.2469400518181
- type: precision
value: 59.63977756660683
- type: recall
value: 66.34146341463415
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9
- type: f1
value: 76.90349206349207
- type: precision
value: 75.32921568627451
- type: recall
value: 80.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.93317132442284
- type: f1
value: 81.92519105034295
- type: precision
value: 80.71283920615635
- type: recall
value: 84.93317132442284
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.1304347826087
- type: f1
value: 65.22394755003451
- type: precision
value: 62.912422360248435
- type: recall
value: 71.1304347826087
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.82608695652173
- type: f1
value: 75.55693581780538
- type: precision
value: 73.79420289855072
- type: recall
value: 79.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74
- type: f1
value: 70.51022222222223
- type: precision
value: 69.29673599347512
- type: recall
value: 74
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 74.14238095238095
- type: precision
value: 72.27214285714285
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.97466827503016
- type: f1
value: 43.080330405420874
- type: precision
value: 41.36505499593557
- type: recall
value: 48.97466827503016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.60000000000001
- type: f1
value: 86.62333333333333
- type: precision
value: 85.225
- type: recall
value: 89.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.2
- type: f1
value: 39.5761253006253
- type: precision
value: 37.991358436312
- type: recall
value: 45.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.70333333333333
- type: precision
value: 85.53166666666667
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.095238095238095
- type: f1
value: 44.60650460650461
- type: precision
value: 42.774116796477045
- type: recall
value: 50.095238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.4
- type: f1
value: 58.35967261904762
- type: precision
value: 56.54857142857143
- type: recall
value: 63.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 87.075
- type: precision
value: 86.12095238095239
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.90333333333334
- type: precision
value: 95.50833333333333
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.9
- type: f1
value: 88.6288888888889
- type: precision
value: 87.61607142857142
- type: recall
value: 90.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.2
- type: f1
value: 60.54377630539395
- type: precision
value: 58.89434482711381
- type: recall
value: 65.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87
- type: f1
value: 84.32412698412699
- type: precision
value: 83.25527777777778
- type: recall
value: 87
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.7
- type: f1
value: 63.07883541295306
- type: precision
value: 61.06117424242426
- type: recall
value: 68.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.78333333333335
- type: precision
value: 90.86666666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 96.96666666666667
- type: precision
value: 96.61666666666667
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27493261455525
- type: f1
value: 85.90745732255168
- type: precision
value: 84.91389637616052
- type: recall
value: 88.27493261455525
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5982905982906
- type: f1
value: 88.4900284900285
- type: precision
value: 87.57122507122507
- type: recall
value: 90.5982905982906
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.90769841269842
- type: precision
value: 85.80178571428571
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.5
- type: f1
value: 78.36796536796538
- type: precision
value: 76.82196969696969
- type: recall
value: 82.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.48846960167715
- type: f1
value: 66.78771089148448
- type: precision
value: 64.98302885095339
- type: recall
value: 71.48846960167715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.50333333333333
- type: precision
value: 91.77499999999999
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.20622568093385
- type: f1
value: 66.83278891450098
- type: precision
value: 65.35065777283677
- type: recall
value: 71.20622568093385
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.717948717948715
- type: f1
value: 43.53146853146853
- type: precision
value: 42.04721204721204
- type: recall
value: 48.717948717948715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.5
- type: f1
value: 53.8564991863928
- type: precision
value: 52.40329436122275
- type: recall
value: 58.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.29
- type: precision
value: 87.09166666666667
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.28971962616822
- type: f1
value: 62.63425307817832
- type: precision
value: 60.98065939771546
- type: recall
value: 67.28971962616822
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 75.5264472455649
- type: precision
value: 74.38205086580086
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.7
- type: f1
value: 86.10809523809525
- type: precision
value: 85.07602564102565
- type: recall
value: 88.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.99999999999999
- type: f1
value: 52.85487521402737
- type: precision
value: 51.53985162713104
- type: recall
value: 56.99999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94
- type: f1
value: 92.45333333333333
- type: precision
value: 91.79166666666667
- type: recall
value: 94
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.61333333333333
- type: precision
value: 89.83333333333331
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34555555555555
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.6563035113035
- type: precision
value: 75.3014652014652
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.7
- type: f1
value: 82.78689263765207
- type: precision
value: 82.06705086580087
- type: recall
value: 84.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.33333333333333
- type: f1
value: 45.461523661523664
- type: precision
value: 43.93545574795575
- type: recall
value: 50.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.6000000000000005
- type: f1
value: 5.442121400446441
- type: precision
value: 5.146630385487529
- type: recall
value: 6.6000000000000005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85
- type: f1
value: 81.04666666666667
- type: precision
value: 79.25
- type: recall
value: 85
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.32142857142857
- type: f1
value: 42.333333333333336
- type: precision
value: 40.69196428571429
- type: recall
value: 47.32142857142857
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 30.735455543358945
- type: f1
value: 26.73616790022338
- type: precision
value: 25.397823220451283
- type: recall
value: 30.735455543358945
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.1
- type: f1
value: 21.975989896371022
- type: precision
value: 21.059885632257203
- type: recall
value: 25.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.75666666666666
- type: precision
value: 92.06166666666665
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.74
- type: precision
value: 92.09166666666667
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.3
- type: f1
value: 66.922442002442
- type: precision
value: 65.38249567099568
- type: recall
value: 71.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.300000000000004
- type: f1
value: 35.78682789299971
- type: precision
value: 34.66425128716588
- type: recall
value: 40.300000000000004
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.82333333333334
- type: precision
value: 94.27833333333334
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.1
- type: f1
value: 47.179074753133584
- type: precision
value: 46.06461044702424
- type: recall
value: 51.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.7
- type: f1
value: 84.71
- type: precision
value: 83.46166666666667
- type: recall
value: 87.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.68333333333334
- type: precision
value: 94.13333333333334
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 82.5577380952381
- type: precision
value: 81.36833333333334
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.16788321167883
- type: f1
value: 16.948865627297987
- type: precision
value: 15.971932568647897
- type: recall
value: 21.16788321167883
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 5.515526831658907
- type: precision
value: 5.141966366966367
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39666666666668
- type: precision
value: 90.58666666666667
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.95666666666666
- type: precision
value: 88.92833333333333
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.76190476190477
- type: f1
value: 74.93386243386244
- type: precision
value: 73.11011904761904
- type: recall
value: 79.76190476190477
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.921439712248537
- type: precision
value: 6.489885109680683
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.75569358178054
- type: f1
value: 40.34699501312631
- type: precision
value: 38.57886764719063
- type: recall
value: 45.75569358178054
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.08333333333333
- type: precision
value: 88.01666666666668
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.06690476190477
- type: precision
value: 91.45095238095239
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.5
- type: f1
value: 6.200363129378736
- type: precision
value: 5.89115314822466
- type: recall
value: 7.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.59307359307358
- type: f1
value: 68.38933553219267
- type: precision
value: 66.62698412698413
- type: recall
value: 73.59307359307358
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.8473282442748
- type: f1
value: 64.72373682297346
- type: precision
value: 62.82834214131924
- type: recall
value: 69.8473282442748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5254730713246
- type: f1
value: 96.72489082969432
- type: precision
value: 96.33672974284326
- type: recall
value: 97.5254730713246
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.6
- type: f1
value: 72.42746031746033
- type: precision
value: 71.14036630036631
- type: recall
value: 75.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.24293785310734
- type: f1
value: 88.86064030131826
- type: precision
value: 87.73540489642184
- type: recall
value: 91.24293785310734
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.2
- type: f1
value: 4.383083659794954
- type: precision
value: 4.027861324289673
- type: recall
value: 6.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 84.09428571428572
- type: precision
value: 83.00333333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.699999999999996
- type: f1
value: 56.1584972394755
- type: precision
value: 54.713456330903135
- type: recall
value: 60.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.2
- type: f1
value: 80.66190476190475
- type: precision
value: 79.19690476190476
- type: recall
value: 84.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.33
- type: precision
value: 90.45
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 5.126828976748276
- type: precision
value: 4.853614328966668
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.76943699731903
- type: f1
value: 77.82873739308057
- type: precision
value: 76.27622452019234
- type: recall
value: 81.76943699731903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.29666666666665
- type: precision
value: 89.40333333333334
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.249011857707508
- type: f1
value: 24.561866096392947
- type: precision
value: 23.356583740215456
- type: recall
value: 29.249011857707508
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.23943661971832
- type: precision
value: 71.66666666666667
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.35928143712575
- type: f1
value: 15.997867865075824
- type: precision
value: 14.882104658301346
- type: recall
value: 20.35928143712575
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 90.25999999999999
- type: precision
value: 89.45333333333335
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 19.65673625772148
- type: precision
value: 18.793705293464992
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.154929577464785
- type: f1
value: 52.3868463305083
- type: precision
value: 50.14938113529662
- type: recall
value: 59.154929577464785
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.51282051282051
- type: f1
value: 66.8089133089133
- type: precision
value: 65.37645687645687
- type: recall
value: 70.51282051282051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93
- type: precision
value: 92.23333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.62212943632568
- type: f1
value: 34.3278276962583
- type: precision
value: 33.07646935732408
- type: recall
value: 38.62212943632568
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.1
- type: f1
value: 23.579609223054604
- type: precision
value: 22.39622774921555
- type: recall
value: 28.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27361563517914
- type: f1
value: 85.12486427795874
- type: precision
value: 83.71335504885994
- type: recall
value: 88.27361563517914
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 86.39928571428571
- type: precision
value: 85.4947557997558
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.77952380952381
- type: precision
value: 82.67602564102565
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.52755905511812
- type: f1
value: 75.3055868016498
- type: precision
value: 73.81889763779527
- type: recall
value: 79.52755905511812
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.76261904761905
- type: precision
value: 72.11670995670995
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.8781163434903
- type: f1
value: 47.25804051288816
- type: precision
value: 45.0603482390186
- type: recall
value: 53.8781163434903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.88
- type: precision
value: 87.96333333333334
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.46153846153847
- type: f1
value: 34.43978243978244
- type: precision
value: 33.429487179487175
- type: recall
value: 38.46153846153847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.9
- type: f1
value: 86.19888888888887
- type: precision
value: 85.07440476190476
- type: recall
value: 88.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.9
- type: f1
value: 82.58857142857143
- type: precision
value: 81.15666666666667
- type: recall
value: 85.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.36999999999999
- type: precision
value: 81.86833333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.51415094339622
- type: f1
value: 63.195000099481234
- type: precision
value: 61.394033442972116
- type: recall
value: 68.51415094339622
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 86.14603174603175
- type: precision
value: 85.1162037037037
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.62043795620438
- type: f1
value: 94.40389294403892
- type: precision
value: 93.7956204379562
- type: recall
value: 95.62043795620438
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.8
- type: f1
value: 78.6532178932179
- type: precision
value: 77.46348795840176
- type: recall
value: 81.8
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.603
- type: map_at_10
value: 8.5
- type: map_at_100
value: 12.985
- type: map_at_1000
value: 14.466999999999999
- type: map_at_3
value: 4.859999999999999
- type: map_at_5
value: 5.817
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 42.331
- type: mrr_at_100
value: 43.592999999999996
- type: mrr_at_1000
value: 43.592999999999996
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 39.966
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 21.353
- type: ndcg_at_100
value: 31.087999999999997
- type: ndcg_at_1000
value: 43.163000000000004
- type: ndcg_at_3
value: 22.999
- type: ndcg_at_5
value: 21.451
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 6.265
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 2.603
- type: recall_at_10
value: 14.474
- type: recall_at_100
value: 40.287
- type: recall_at_1000
value: 76.606
- type: recall_at_3
value: 5.978
- type: recall_at_5
value: 7.819
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.7848
- type: ap
value: 13.661023167088224
- type: f1
value: 53.61686134460943
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.28183361629882
- type: f1
value: 61.55481034919965
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.972128420092396
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59933241938367
- type: cos_sim_ap
value: 72.20760361208136
- type: cos_sim_f1
value: 66.4447731755424
- type: cos_sim_precision
value: 62.35539102267469
- type: cos_sim_recall
value: 71.10817941952506
- type: dot_accuracy
value: 78.98313166835548
- type: dot_ap
value: 44.492521645493795
- type: dot_f1
value: 45.814889336016094
- type: dot_precision
value: 37.02439024390244
- type: dot_recall
value: 60.07915567282321
- type: euclidean_accuracy
value: 85.3907134767837
- type: euclidean_ap
value: 71.53847289080343
- type: euclidean_f1
value: 65.95952206778834
- type: euclidean_precision
value: 61.31006346328196
- type: euclidean_recall
value: 71.37203166226914
- type: manhattan_accuracy
value: 85.40859510043511
- type: manhattan_ap
value: 71.49664104395515
- type: manhattan_f1
value: 65.98569969356485
- type: manhattan_precision
value: 63.928748144482924
- type: manhattan_recall
value: 68.17941952506597
- type: max_accuracy
value: 85.59933241938367
- type: max_ap
value: 72.20760361208136
- type: max_f1
value: 66.4447731755424
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.83261536073273
- type: cos_sim_ap
value: 85.48178133644264
- type: cos_sim_f1
value: 77.87816307403935
- type: cos_sim_precision
value: 75.88953021114926
- type: cos_sim_recall
value: 79.97382198952879
- type: dot_accuracy
value: 79.76287499514883
- type: dot_ap
value: 59.17438838475084
- type: dot_f1
value: 56.34566667855996
- type: dot_precision
value: 52.50349092359864
- type: dot_recall
value: 60.794579611949494
- type: euclidean_accuracy
value: 88.76857996662397
- type: euclidean_ap
value: 85.22764834359887
- type: euclidean_f1
value: 77.65379751543554
- type: euclidean_precision
value: 75.11152683839401
- type: euclidean_recall
value: 80.37419156144134
- type: manhattan_accuracy
value: 88.6987231730508
- type: manhattan_ap
value: 85.18907981724007
- type: manhattan_f1
value: 77.51967028849757
- type: manhattan_precision
value: 75.49992701795358
- type: manhattan_recall
value: 79.65044656606098
- type: max_accuracy
value: 88.83261536073273
- type: max_ap
value: 85.48178133644264
- type: max_f1
value: 77.87816307403935
---
## Multilingual-E5-base
[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
and continually trained on a mixture of multilingual datasets.
It supports the 100 languages covered by xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
**First stage**: contrastive pre-training with weak supervision
| Dataset | Weak supervision | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M |
| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M |
| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M |
| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M |
| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M |
| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M |
**Second stage**: supervised fine-tuning
| Dataset | Language | # of text pairs |
|----------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k |
| [NQ](https://github.com/facebookresearch/DPR) | English | 70k |
| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k |
| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k |
| [Quora](https://huggingface.co/datasets/quora) | English | 150k |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k |
For all labeled datasets, we only use their training sets for fine-tuning.
For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
| Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th |
|-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- |
| BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 |
| BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| | |
| multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
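For a quick local sanity check, below is a minimal sketch using the `mteb` Python package (assuming `mteb` and `sentence-transformers` are installed; the task name and output folder are only illustrative, and the official unilm/e5 scripts additionally take care of the required "query: "/"passage: " prefixes):
```python
# Minimal sketch: run a single MTEB task locally (illustrative only).
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-base")
evaluation = MTEB(tasks=["Banking77Classification"])  # any MTEB task name works here
results = evaluation.run(model, output_folder="results/multilingual-e5-base")
print(results)
```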
## Support for Sentence Transformers
Below is an example of usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-base')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb (a short sketch follows the list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores fall mostly between 0.7 and 1.0?**
This is known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
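To make this concrete, here is a small sketch (with made-up texts) showing that the ranking, rather than the absolute score, is what a retrieval setup relies on:
```python
# Sketch: absolute cosine scores cluster in a narrow high band, but the relevant
# passage still ranks first (hypothetical texts, illustrative only).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('intfloat/multilingual-e5-base')
query = model.encode(['query: common symptoms of influenza'], normalize_embeddings=True)
passages = model.encode(
    ['passage: Influenza typically causes fever, cough, sore throat and body aches.',
     'passage: The Eiffel Tower was completed in 1889 in Paris.'],
    normalize_embeddings=True)

scores = (query @ passages.T).flatten()
print(scores)               # both values usually land somewhere around 0.7-1.0
print(np.argsort(-scores))  # index 0 (the flu passage) should still come first
```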
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2024multilingual,
title={Multilingual E5 Text Embeddings: A Technical Report},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2402.05672},
year={2024}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
| [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Snowflake/snowflake-arctic-embed-l | Snowflake | sentence-similarity | [
"sentence-transformers",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"arxiv:2407.18887",
"arxiv:2405.05374",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,712 | 1,734 | 26,313 | 91 | ---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
new_version: Snowflake/snowflake-arctic-embed-l-v2.0
model-index:
- name: snowflake-arctic-embed-l
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.80597014925374
- type: ap
value: 37.911466766189875
- type: f1
value: 68.88606927542106
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 78.402275
- type: ap
value: 73.03294793248114
- type: f1
value: 78.3147786132161
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.717999999999996
- type: f1
value: 35.918044248787766
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 34.495
- type: map_at_10
value: 50.236000000000004
- type: map_at_100
value: 50.944
- type: map_at_1000
value: 50.94499999999999
- type: map_at_3
value: 45.341
- type: map_at_5
value: 48.286
- type: mrr_at_1
value: 35.135
- type: mrr_at_10
value: 50.471
- type: mrr_at_100
value: 51.185
- type: mrr_at_1000
value: 51.187000000000005
- type: mrr_at_3
value: 45.602
- type: mrr_at_5
value: 48.468
- type: ndcg_at_1
value: 34.495
- type: ndcg_at_10
value: 59.086000000000006
- type: ndcg_at_100
value: 61.937
- type: ndcg_at_1000
value: 61.966
- type: ndcg_at_3
value: 49.062
- type: ndcg_at_5
value: 54.367
- type: precision_at_1
value: 34.495
- type: precision_at_10
value: 8.734
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.962
- type: precision_at_5
value: 14.552000000000001
- type: recall_at_1
value: 34.495
- type: recall_at_10
value: 87.33999999999999
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 59.885999999999996
- type: recall_at_5
value: 72.76
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.46440874635501
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.28720154213723
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.34614226394902
- type: mrr
value: 75.05628105351096
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.41072716728198
- type: cos_sim_spearman
value: 86.34534093114372
- type: euclidean_pearson
value: 85.34009667750838
- type: euclidean_spearman
value: 86.34534093114372
- type: manhattan_pearson
value: 85.2158833586889
- type: manhattan_spearman
value: 86.60920236509224
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.06493506493507
- type: f1
value: 79.28108600339833
- task:
type: Clustering
dataset:
name: MTEB BigPatentClustering
type: jinaai/big-patent-clustering
config: default
split: test
revision: 62d5330920bca426ce9d3c76ea914f15fc83e891
metrics:
- type: v_measure
value: 20.545049432417287
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.54369718479804
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.64941588219162
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 37.264
- type: map_at_10
value: 49.43
- type: map_at_100
value: 50.967
- type: map_at_1000
value: 51.08200000000001
- type: map_at_3
value: 45.742
- type: map_at_5
value: 47.764
- type: mrr_at_1
value: 44.921
- type: mrr_at_10
value: 54.879999999999995
- type: mrr_at_100
value: 55.525000000000006
- type: mrr_at_1000
value: 55.565
- type: mrr_at_3
value: 52.480000000000004
- type: mrr_at_5
value: 53.86
- type: ndcg_at_1
value: 44.921
- type: ndcg_at_10
value: 55.664
- type: ndcg_at_100
value: 60.488
- type: ndcg_at_1000
value: 62.138000000000005
- type: ndcg_at_3
value: 50.797000000000004
- type: ndcg_at_5
value: 52.94799999999999
- type: precision_at_1
value: 44.921
- type: precision_at_10
value: 10.587
- type: precision_at_100
value: 1.629
- type: precision_at_1000
value: 0.203
- type: precision_at_3
value: 24.034
- type: precision_at_5
value: 17.224999999999998
- type: recall_at_1
value: 37.264
- type: recall_at_10
value: 67.15
- type: recall_at_100
value: 86.811
- type: recall_at_1000
value: 97.172
- type: recall_at_3
value: 53.15800000000001
- type: recall_at_5
value: 59.116
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 36.237
- type: map_at_10
value: 47.941
- type: map_at_100
value: 49.131
- type: map_at_1000
value: 49.26
- type: map_at_3
value: 44.561
- type: map_at_5
value: 46.28
- type: mrr_at_1
value: 45.605000000000004
- type: mrr_at_10
value: 54.039
- type: mrr_at_100
value: 54.653
- type: mrr_at_1000
value: 54.688
- type: mrr_at_3
value: 52.006
- type: mrr_at_5
value: 53.096
- type: ndcg_at_1
value: 45.605000000000004
- type: ndcg_at_10
value: 53.916
- type: ndcg_at_100
value: 57.745999999999995
- type: ndcg_at_1000
value: 59.492999999999995
- type: ndcg_at_3
value: 49.774
- type: ndcg_at_5
value: 51.434999999999995
- type: precision_at_1
value: 45.605000000000004
- type: precision_at_10
value: 10.229000000000001
- type: precision_at_100
value: 1.55
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 24.098
- type: precision_at_5
value: 16.726
- type: recall_at_1
value: 36.237
- type: recall_at_10
value: 64.03
- type: recall_at_100
value: 80.423
- type: recall_at_1000
value: 91.03
- type: recall_at_3
value: 51.20400000000001
- type: recall_at_5
value: 56.298
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 47.278
- type: map_at_10
value: 59.757000000000005
- type: map_at_100
value: 60.67
- type: map_at_1000
value: 60.714
- type: map_at_3
value: 56.714
- type: map_at_5
value: 58.453
- type: mrr_at_1
value: 53.73
- type: mrr_at_10
value: 62.970000000000006
- type: mrr_at_100
value: 63.507999999999996
- type: mrr_at_1000
value: 63.53
- type: mrr_at_3
value: 60.909
- type: mrr_at_5
value: 62.172000000000004
- type: ndcg_at_1
value: 53.73
- type: ndcg_at_10
value: 64.97
- type: ndcg_at_100
value: 68.394
- type: ndcg_at_1000
value: 69.255
- type: ndcg_at_3
value: 60.228
- type: ndcg_at_5
value: 62.617999999999995
- type: precision_at_1
value: 53.73
- type: precision_at_10
value: 10.056
- type: precision_at_100
value: 1.265
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 26.332
- type: precision_at_5
value: 17.743000000000002
- type: recall_at_1
value: 47.278
- type: recall_at_10
value: 76.86500000000001
- type: recall_at_100
value: 91.582
- type: recall_at_1000
value: 97.583
- type: recall_at_3
value: 64.443
- type: recall_at_5
value: 70.283
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 29.702
- type: map_at_10
value: 39.463
- type: map_at_100
value: 40.508
- type: map_at_1000
value: 40.579
- type: map_at_3
value: 36.748999999999995
- type: map_at_5
value: 38.296
- type: mrr_at_1
value: 31.977
- type: mrr_at_10
value: 41.739
- type: mrr_at_100
value: 42.586
- type: mrr_at_1000
value: 42.636
- type: mrr_at_3
value: 39.096
- type: mrr_at_5
value: 40.695
- type: ndcg_at_1
value: 31.977
- type: ndcg_at_10
value: 44.855000000000004
- type: ndcg_at_100
value: 49.712
- type: ndcg_at_1000
value: 51.443000000000005
- type: ndcg_at_3
value: 39.585
- type: ndcg_at_5
value: 42.244
- type: precision_at_1
value: 31.977
- type: precision_at_10
value: 6.768000000000001
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 16.761
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 29.702
- type: recall_at_10
value: 59.082
- type: recall_at_100
value: 80.92
- type: recall_at_1000
value: 93.728
- type: recall_at_3
value: 45.212
- type: recall_at_5
value: 51.449
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 21.336
- type: map_at_10
value: 30.137999999999998
- type: map_at_100
value: 31.385
- type: map_at_1000
value: 31.495
- type: map_at_3
value: 27.481
- type: map_at_5
value: 28.772
- type: mrr_at_1
value: 25.871
- type: mrr_at_10
value: 34.686
- type: mrr_at_100
value: 35.649
- type: mrr_at_1000
value: 35.705
- type: mrr_at_3
value: 32.09
- type: mrr_at_5
value: 33.52
- type: ndcg_at_1
value: 25.871
- type: ndcg_at_10
value: 35.617
- type: ndcg_at_100
value: 41.272999999999996
- type: ndcg_at_1000
value: 43.725
- type: ndcg_at_3
value: 30.653999999999996
- type: ndcg_at_5
value: 32.714
- type: precision_at_1
value: 25.871
- type: precision_at_10
value: 6.4799999999999995
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 14.469000000000001
- type: precision_at_5
value: 10.274
- type: recall_at_1
value: 21.336
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 71.773
- type: recall_at_1000
value: 89.05199999999999
- type: recall_at_3
value: 34.172999999999995
- type: recall_at_5
value: 39.397999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 34.424
- type: map_at_10
value: 45.647999999999996
- type: map_at_100
value: 46.907
- type: map_at_1000
value: 47.010999999999996
- type: map_at_3
value: 42.427
- type: map_at_5
value: 44.285000000000004
- type: mrr_at_1
value: 41.867
- type: mrr_at_10
value: 51.17699999999999
- type: mrr_at_100
value: 51.937
- type: mrr_at_1000
value: 51.975
- type: mrr_at_3
value: 48.941
- type: mrr_at_5
value: 50.322
- type: ndcg_at_1
value: 41.867
- type: ndcg_at_10
value: 51.534
- type: ndcg_at_100
value: 56.696999999999996
- type: ndcg_at_1000
value: 58.475
- type: ndcg_at_3
value: 46.835
- type: ndcg_at_5
value: 49.161
- type: precision_at_1
value: 41.867
- type: precision_at_10
value: 9.134
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 22.073
- type: precision_at_5
value: 15.495999999999999
- type: recall_at_1
value: 34.424
- type: recall_at_10
value: 63.237
- type: recall_at_100
value: 84.774
- type: recall_at_1000
value: 95.987
- type: recall_at_3
value: 49.888
- type: recall_at_5
value: 55.940999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 30.72
- type: map_at_10
value: 41.327999999999996
- type: map_at_100
value: 42.651
- type: map_at_1000
value: 42.739
- type: map_at_3
value: 38.223
- type: map_at_5
value: 40.053
- type: mrr_at_1
value: 37.9
- type: mrr_at_10
value: 46.857
- type: mrr_at_100
value: 47.673
- type: mrr_at_1000
value: 47.711999999999996
- type: mrr_at_3
value: 44.292
- type: mrr_at_5
value: 45.845
- type: ndcg_at_1
value: 37.9
- type: ndcg_at_10
value: 47.105999999999995
- type: ndcg_at_100
value: 52.56999999999999
- type: ndcg_at_1000
value: 54.37800000000001
- type: ndcg_at_3
value: 42.282
- type: ndcg_at_5
value: 44.646
- type: precision_at_1
value: 37.9
- type: precision_at_10
value: 8.368
- type: precision_at_100
value: 1.283
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 20.015
- type: precision_at_5
value: 14.132
- type: recall_at_1
value: 30.72
- type: recall_at_10
value: 58.826
- type: recall_at_100
value: 82.104
- type: recall_at_1000
value: 94.194
- type: recall_at_3
value: 44.962999999999994
- type: recall_at_5
value: 51.426
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 31.656583333333334
- type: map_at_10
value: 41.59883333333333
- type: map_at_100
value: 42.80350000000001
- type: map_at_1000
value: 42.91075
- type: map_at_3
value: 38.68908333333333
- type: map_at_5
value: 40.27733333333334
- type: mrr_at_1
value: 37.23483333333334
- type: mrr_at_10
value: 45.782000000000004
- type: mrr_at_100
value: 46.577083333333334
- type: mrr_at_1000
value: 46.62516666666667
- type: mrr_at_3
value: 43.480666666666664
- type: mrr_at_5
value: 44.79833333333333
- type: ndcg_at_1
value: 37.23483333333334
- type: ndcg_at_10
value: 46.971500000000006
- type: ndcg_at_100
value: 51.90125
- type: ndcg_at_1000
value: 53.86366666666667
- type: ndcg_at_3
value: 42.31791666666667
- type: ndcg_at_5
value: 44.458666666666666
- type: precision_at_1
value: 37.23483333333334
- type: precision_at_10
value: 8.044583333333332
- type: precision_at_100
value: 1.2334166666666666
- type: precision_at_1000
value: 0.15925
- type: precision_at_3
value: 19.240833333333327
- type: precision_at_5
value: 13.435083333333333
- type: recall_at_1
value: 31.656583333333334
- type: recall_at_10
value: 58.44758333333333
- type: recall_at_100
value: 79.93658333333332
- type: recall_at_1000
value: 93.32491666666668
- type: recall_at_3
value: 45.44266666666667
- type: recall_at_5
value: 50.99866666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 28.247
- type: map_at_10
value: 35.443999999999996
- type: map_at_100
value: 36.578
- type: map_at_1000
value: 36.675999999999995
- type: map_at_3
value: 33.276
- type: map_at_5
value: 34.536
- type: mrr_at_1
value: 31.747999999999998
- type: mrr_at_10
value: 38.413000000000004
- type: mrr_at_100
value: 39.327
- type: mrr_at_1000
value: 39.389
- type: mrr_at_3
value: 36.401
- type: mrr_at_5
value: 37.543
- type: ndcg_at_1
value: 31.747999999999998
- type: ndcg_at_10
value: 39.646
- type: ndcg_at_100
value: 44.861000000000004
- type: ndcg_at_1000
value: 47.197
- type: ndcg_at_3
value: 35.764
- type: ndcg_at_5
value: 37.635999999999996
- type: precision_at_1
value: 31.747999999999998
- type: precision_at_10
value: 6.12
- type: precision_at_100
value: 0.942
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 15.235000000000001
- type: precision_at_5
value: 10.491
- type: recall_at_1
value: 28.247
- type: recall_at_10
value: 49.456
- type: recall_at_100
value: 73.02499999999999
- type: recall_at_1000
value: 89.898
- type: recall_at_3
value: 38.653999999999996
- type: recall_at_5
value: 43.259
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 22.45
- type: map_at_10
value: 30.476999999999997
- type: map_at_100
value: 31.630999999999997
- type: map_at_1000
value: 31.755
- type: map_at_3
value: 27.989000000000004
- type: map_at_5
value: 29.410999999999998
- type: mrr_at_1
value: 26.979
- type: mrr_at_10
value: 34.316
- type: mrr_at_100
value: 35.272999999999996
- type: mrr_at_1000
value: 35.342
- type: mrr_at_3
value: 32.14
- type: mrr_at_5
value: 33.405
- type: ndcg_at_1
value: 26.979
- type: ndcg_at_10
value: 35.166
- type: ndcg_at_100
value: 40.583000000000006
- type: ndcg_at_1000
value: 43.282
- type: ndcg_at_3
value: 30.916
- type: ndcg_at_5
value: 32.973
- type: precision_at_1
value: 26.979
- type: precision_at_10
value: 6.132
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.227
- type: recall_at_1
value: 22.45
- type: recall_at_10
value: 45.348
- type: recall_at_100
value: 69.484
- type: recall_at_1000
value: 88.628
- type: recall_at_3
value: 33.338
- type: recall_at_5
value: 38.746
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 32.123000000000005
- type: map_at_10
value: 41.778
- type: map_at_100
value: 42.911
- type: map_at_1000
value: 42.994
- type: map_at_3
value: 38.558
- type: map_at_5
value: 40.318
- type: mrr_at_1
value: 37.687
- type: mrr_at_10
value: 45.889
- type: mrr_at_100
value: 46.672999999999995
- type: mrr_at_1000
value: 46.72
- type: mrr_at_3
value: 43.33
- type: mrr_at_5
value: 44.734
- type: ndcg_at_1
value: 37.687
- type: ndcg_at_10
value: 47.258
- type: ndcg_at_100
value: 52.331
- type: ndcg_at_1000
value: 54.152
- type: ndcg_at_3
value: 41.857
- type: ndcg_at_5
value: 44.283
- type: precision_at_1
value: 37.687
- type: precision_at_10
value: 7.892
- type: precision_at_100
value: 1.183
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 18.781
- type: precision_at_5
value: 13.134
- type: recall_at_1
value: 32.123000000000005
- type: recall_at_10
value: 59.760000000000005
- type: recall_at_100
value: 81.652
- type: recall_at_1000
value: 94.401
- type: recall_at_3
value: 44.996
- type: recall_at_5
value: 51.184
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 33.196999999999996
- type: map_at_10
value: 42.012
- type: map_at_100
value: 43.663999999999994
- type: map_at_1000
value: 43.883
- type: map_at_3
value: 39.33
- type: map_at_5
value: 40.586
- type: mrr_at_1
value: 39.328
- type: mrr_at_10
value: 46.57
- type: mrr_at_100
value: 47.508
- type: mrr_at_1000
value: 47.558
- type: mrr_at_3
value: 44.532
- type: mrr_at_5
value: 45.58
- type: ndcg_at_1
value: 39.328
- type: ndcg_at_10
value: 47.337
- type: ndcg_at_100
value: 52.989
- type: ndcg_at_1000
value: 55.224
- type: ndcg_at_3
value: 43.362
- type: ndcg_at_5
value: 44.866
- type: precision_at_1
value: 39.328
- type: precision_at_10
value: 8.577
- type: precision_at_100
value: 1.5789999999999997
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 19.697
- type: precision_at_5
value: 13.755
- type: recall_at_1
value: 33.196999999999996
- type: recall_at_10
value: 56.635000000000005
- type: recall_at_100
value: 81.882
- type: recall_at_1000
value: 95.342
- type: recall_at_3
value: 44.969
- type: recall_at_5
value: 49.266
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 26.901000000000003
- type: map_at_10
value: 35.77
- type: map_at_100
value: 36.638999999999996
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 33.219
- type: map_at_5
value: 34.574
- type: mrr_at_1
value: 29.205
- type: mrr_at_10
value: 37.848
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.682
- type: mrr_at_3
value: 35.551
- type: mrr_at_5
value: 36.808
- type: ndcg_at_1
value: 29.205
- type: ndcg_at_10
value: 40.589
- type: ndcg_at_100
value: 45.171
- type: ndcg_at_1000
value: 47.602
- type: ndcg_at_3
value: 35.760999999999996
- type: ndcg_at_5
value: 37.980000000000004
- type: precision_at_1
value: 29.205
- type: precision_at_10
value: 6.192
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 15.034
- type: precision_at_5
value: 10.424999999999999
- type: recall_at_1
value: 26.901000000000003
- type: recall_at_10
value: 53.236000000000004
- type: recall_at_100
value: 74.809
- type: recall_at_1000
value: 92.884
- type: recall_at_3
value: 40.314
- type: recall_at_5
value: 45.617999999999995
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 16.794999999999998
- type: map_at_10
value: 29.322
- type: map_at_100
value: 31.463
- type: map_at_1000
value: 31.643
- type: map_at_3
value: 24.517
- type: map_at_5
value: 27.237000000000002
- type: mrr_at_1
value: 37.655
- type: mrr_at_10
value: 50.952
- type: mrr_at_100
value: 51.581999999999994
- type: mrr_at_1000
value: 51.61
- type: mrr_at_3
value: 47.991
- type: mrr_at_5
value: 49.744
- type: ndcg_at_1
value: 37.655
- type: ndcg_at_10
value: 39.328
- type: ndcg_at_100
value: 46.358
- type: ndcg_at_1000
value: 49.245
- type: ndcg_at_3
value: 33.052
- type: ndcg_at_5
value: 35.407
- type: precision_at_1
value: 37.655
- type: precision_at_10
value: 12.202
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.252
- type: precision_at_3
value: 24.973
- type: precision_at_5
value: 19.075
- type: recall_at_1
value: 16.794999999999998
- type: recall_at_10
value: 45.716
- type: recall_at_100
value: 68.919
- type: recall_at_1000
value: 84.71600000000001
- type: recall_at_3
value: 30.135
- type: recall_at_5
value: 37.141999999999996
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.817
- type: map_at_10
value: 22.058
- type: map_at_100
value: 31.805
- type: map_at_1000
value: 33.562999999999995
- type: map_at_3
value: 15.537
- type: map_at_5
value: 18.199
- type: mrr_at_1
value: 72.75
- type: mrr_at_10
value: 79.804
- type: mrr_at_100
value: 80.089
- type: mrr_at_1000
value: 80.09100000000001
- type: mrr_at_3
value: 78.75
- type: mrr_at_5
value: 79.325
- type: ndcg_at_1
value: 59.875
- type: ndcg_at_10
value: 45.972
- type: ndcg_at_100
value: 51.092999999999996
- type: ndcg_at_1000
value: 58.048
- type: ndcg_at_3
value: 50.552
- type: ndcg_at_5
value: 47.672
- type: precision_at_1
value: 72.75
- type: precision_at_10
value: 37.05
- type: precision_at_100
value: 12.005
- type: precision_at_1000
value: 2.221
- type: precision_at_3
value: 54.083000000000006
- type: precision_at_5
value: 46.2
- type: recall_at_1
value: 9.817
- type: recall_at_10
value: 27.877000000000002
- type: recall_at_100
value: 57.974000000000004
- type: recall_at_1000
value: 80.085
- type: recall_at_3
value: 16.911
- type: recall_at_5
value: 20.689
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.464999999999996
- type: f1
value: 42.759588662873796
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 75.82900000000001
- type: map_at_10
value: 84.613
- type: map_at_100
value: 84.845
- type: map_at_1000
value: 84.855
- type: map_at_3
value: 83.498
- type: map_at_5
value: 84.29299999999999
- type: mrr_at_1
value: 81.69800000000001
- type: mrr_at_10
value: 88.84100000000001
- type: mrr_at_100
value: 88.887
- type: mrr_at_1000
value: 88.888
- type: mrr_at_3
value: 88.179
- type: mrr_at_5
value: 88.69200000000001
- type: ndcg_at_1
value: 81.69800000000001
- type: ndcg_at_10
value: 88.21799999999999
- type: ndcg_at_100
value: 88.961
- type: ndcg_at_1000
value: 89.131
- type: ndcg_at_3
value: 86.591
- type: ndcg_at_5
value: 87.666
- type: precision_at_1
value: 81.69800000000001
- type: precision_at_10
value: 10.615
- type: precision_at_100
value: 1.125
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.208
- type: precision_at_5
value: 20.681
- type: recall_at_1
value: 75.82900000000001
- type: recall_at_10
value: 94.97
- type: recall_at_100
value: 97.786
- type: recall_at_1000
value: 98.809
- type: recall_at_3
value: 90.625
- type: recall_at_5
value: 93.345
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 22.788
- type: map_at_10
value: 36.71
- type: map_at_100
value: 38.527
- type: map_at_1000
value: 38.701
- type: map_at_3
value: 32.318999999999996
- type: map_at_5
value: 34.809
- type: mrr_at_1
value: 44.444
- type: mrr_at_10
value: 52.868
- type: mrr_at_100
value: 53.52400000000001
- type: mrr_at_1000
value: 53.559999999999995
- type: mrr_at_3
value: 50.153999999999996
- type: mrr_at_5
value: 51.651
- type: ndcg_at_1
value: 44.444
- type: ndcg_at_10
value: 44.707
- type: ndcg_at_100
value: 51.174
- type: ndcg_at_1000
value: 53.996
- type: ndcg_at_3
value: 40.855999999999995
- type: ndcg_at_5
value: 42.113
- type: precision_at_1
value: 44.444
- type: precision_at_10
value: 12.021999999999998
- type: precision_at_100
value: 1.8950000000000002
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 26.8
- type: precision_at_5
value: 19.66
- type: recall_at_1
value: 22.788
- type: recall_at_10
value: 51.793
- type: recall_at_100
value: 75.69500000000001
- type: recall_at_1000
value: 92.292
- type: recall_at_3
value: 37.375
- type: recall_at_5
value: 43.682
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 41.276
- type: map_at_10
value: 67.245
- type: map_at_100
value: 68.061
- type: map_at_1000
value: 68.11399999999999
- type: map_at_3
value: 63.693
- type: map_at_5
value: 65.90899999999999
- type: mrr_at_1
value: 82.552
- type: mrr_at_10
value: 87.741
- type: mrr_at_100
value: 87.868
- type: mrr_at_1000
value: 87.871
- type: mrr_at_3
value: 86.98599999999999
- type: mrr_at_5
value: 87.469
- type: ndcg_at_1
value: 82.552
- type: ndcg_at_10
value: 75.176
- type: ndcg_at_100
value: 77.902
- type: ndcg_at_1000
value: 78.852
- type: ndcg_at_3
value: 70.30499999999999
- type: ndcg_at_5
value: 73.00999999999999
- type: precision_at_1
value: 82.552
- type: precision_at_10
value: 15.765
- type: precision_at_100
value: 1.788
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 45.375
- type: precision_at_5
value: 29.360999999999997
- type: recall_at_1
value: 41.276
- type: recall_at_10
value: 78.825
- type: recall_at_100
value: 89.41900000000001
- type: recall_at_1000
value: 95.625
- type: recall_at_3
value: 68.062
- type: recall_at_5
value: 73.40299999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 72.876
- type: ap
value: 67.15477852410164
- type: f1
value: 72.65147370025373
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.748
- type: map_at_10
value: 34.626000000000005
- type: map_at_100
value: 35.813
- type: map_at_1000
value: 35.859
- type: map_at_3
value: 30.753000000000004
- type: map_at_5
value: 33.049
- type: mrr_at_1
value: 22.35
- type: mrr_at_10
value: 35.23
- type: mrr_at_100
value: 36.359
- type: mrr_at_1000
value: 36.399
- type: mrr_at_3
value: 31.436999999999998
- type: mrr_at_5
value: 33.687
- type: ndcg_at_1
value: 22.364
- type: ndcg_at_10
value: 41.677
- type: ndcg_at_100
value: 47.355999999999995
- type: ndcg_at_1000
value: 48.494
- type: ndcg_at_3
value: 33.85
- type: ndcg_at_5
value: 37.942
- type: precision_at_1
value: 22.364
- type: precision_at_10
value: 6.6000000000000005
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.527000000000001
- type: precision_at_5
value: 10.796999999999999
- type: recall_at_1
value: 21.748
- type: recall_at_10
value: 63.292
- type: recall_at_100
value: 89.427
- type: recall_at_1000
value: 98.13499999999999
- type: recall_at_3
value: 42.126000000000005
- type: recall_at_5
value: 51.968
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.62425900592795
- type: f1
value: 92.08497761553683
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 64.51436388508893
- type: f1
value: 45.884016531912906
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 76.57172995780591
- type: f1
value: 75.52979910878491
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 44.84052695201612
- type: v_measure
value: 21.443971229936494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.79354404841965
- type: f1
value: 63.17260074126185
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.09616677874916
- type: f1
value: 69.74285784421075
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.474709231086184
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.93630367824217
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.08234393834005
- type: mrr
value: 29.740466971605432
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.2059999999999995
- type: map_at_10
value: 14.442
- type: map_at_100
value: 18.005
- type: map_at_1000
value: 19.488
- type: map_at_3
value: 10.666
- type: map_at_5
value: 12.45
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 57.519
- type: mrr_at_100
value: 58.13700000000001
- type: mrr_at_1000
value: 58.167
- type: mrr_at_3
value: 55.779
- type: mrr_at_5
value: 56.940000000000005
- type: ndcg_at_1
value: 45.82
- type: ndcg_at_10
value: 37.651
- type: ndcg_at_100
value: 34.001999999999995
- type: ndcg_at_1000
value: 42.626
- type: ndcg_at_3
value: 43.961
- type: ndcg_at_5
value: 41.461
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 27.584999999999997
- type: precision_at_100
value: 8.455
- type: precision_at_1000
value: 2.118
- type: precision_at_3
value: 41.692
- type: precision_at_5
value: 36.161
- type: recall_at_1
value: 6.2059999999999995
- type: recall_at_10
value: 18.599
- type: recall_at_100
value: 33.608
- type: recall_at_1000
value: 65.429
- type: recall_at_3
value: 12.126000000000001
- type: recall_at_5
value: 14.902000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.117000000000004
- type: map_at_10
value: 55.535000000000004
- type: map_at_100
value: 56.32899999999999
- type: map_at_1000
value: 56.34400000000001
- type: map_at_3
value: 51.439
- type: map_at_5
value: 53.89699999999999
- type: mrr_at_1
value: 43.714
- type: mrr_at_10
value: 58.05200000000001
- type: mrr_at_100
value: 58.582
- type: mrr_at_1000
value: 58.592
- type: mrr_at_3
value: 54.896
- type: mrr_at_5
value: 56.874
- type: ndcg_at_1
value: 43.685
- type: ndcg_at_10
value: 63.108
- type: ndcg_at_100
value: 66.231
- type: ndcg_at_1000
value: 66.583
- type: ndcg_at_3
value: 55.659000000000006
- type: ndcg_at_5
value: 59.681
- type: precision_at_1
value: 43.685
- type: precision_at_10
value: 9.962
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.961
- type: precision_at_5
value: 17.352
- type: recall_at_1
value: 39.117000000000004
- type: recall_at_10
value: 83.408
- type: recall_at_100
value: 96.553
- type: recall_at_1000
value: 99.136
- type: recall_at_3
value: 64.364
- type: recall_at_5
value: 73.573
- task:
type: Classification
dataset:
name: MTEB NewsClassification
type: ag_news
config: default
split: test
revision: eb185aade064a813bc0b7f42de02595523103ca4
metrics:
- type: accuracy
value: 78.87763157894737
- type: f1
value: 78.69611753876177
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (en)
type: GEM/opusparcus
config: en
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.89816700610999
- type: cos_sim_ap
value: 100
- type: cos_sim_f1
value: 99.9490575649516
- type: cos_sim_precision
value: 100
- type: cos_sim_recall
value: 99.89816700610999
- type: dot_accuracy
value: 99.89816700610999
- type: dot_ap
value: 100
- type: dot_f1
value: 99.9490575649516
- type: dot_precision
value: 100
- type: dot_recall
value: 99.89816700610999
- type: euclidean_accuracy
value: 99.89816700610999
- type: euclidean_ap
value: 100
- type: euclidean_f1
value: 99.9490575649516
- type: euclidean_precision
value: 100
- type: euclidean_recall
value: 99.89816700610999
- type: manhattan_accuracy
value: 99.89816700610999
- type: manhattan_ap
value: 100
- type: manhattan_f1
value: 99.9490575649516
- type: manhattan_precision
value: 100
- type: manhattan_recall
value: 99.89816700610999
- type: max_accuracy
value: 99.89816700610999
- type: max_ap
value: 100
- type: max_f1
value: 99.9490575649516
- task:
type: PairClassification
dataset:
name: MTEB PawsX (en)
type: paws-x
config: en
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 62
- type: cos_sim_ap
value: 62.26837791655737
- type: cos_sim_f1
value: 62.607449856733524
- type: cos_sim_precision
value: 46.36604774535809
- type: cos_sim_recall
value: 96.36163175303197
- type: dot_accuracy
value: 62
- type: dot_ap
value: 62.26736459439965
- type: dot_f1
value: 62.607449856733524
- type: dot_precision
value: 46.36604774535809
- type: dot_recall
value: 96.36163175303197
- type: euclidean_accuracy
value: 62
- type: euclidean_ap
value: 62.26826112548132
- type: euclidean_f1
value: 62.607449856733524
- type: euclidean_precision
value: 46.36604774535809
- type: euclidean_recall
value: 96.36163175303197
- type: manhattan_accuracy
value: 62
- type: manhattan_ap
value: 62.26223761507973
- type: manhattan_f1
value: 62.585034013605444
- type: manhattan_precision
value: 46.34146341463415
- type: manhattan_recall
value: 96.36163175303197
- type: max_accuracy
value: 62
- type: max_ap
value: 62.26837791655737
- type: max_f1
value: 62.607449856733524
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 69.90899999999999
- type: map_at_10
value: 83.56700000000001
- type: map_at_100
value: 84.19200000000001
- type: map_at_1000
value: 84.212
- type: map_at_3
value: 80.658
- type: map_at_5
value: 82.473
- type: mrr_at_1
value: 80.4
- type: mrr_at_10
value: 86.699
- type: mrr_at_100
value: 86.798
- type: mrr_at_1000
value: 86.80099999999999
- type: mrr_at_3
value: 85.677
- type: mrr_at_5
value: 86.354
- type: ndcg_at_1
value: 80.43
- type: ndcg_at_10
value: 87.41
- type: ndcg_at_100
value: 88.653
- type: ndcg_at_1000
value: 88.81599999999999
- type: ndcg_at_3
value: 84.516
- type: ndcg_at_5
value: 86.068
- type: precision_at_1
value: 80.43
- type: precision_at_10
value: 13.234000000000002
- type: precision_at_100
value: 1.513
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.93
- type: precision_at_5
value: 24.26
- type: recall_at_1
value: 69.90899999999999
- type: recall_at_10
value: 94.687
- type: recall_at_100
value: 98.96000000000001
- type: recall_at_1000
value: 99.79599999999999
- type: recall_at_3
value: 86.25699999999999
- type: recall_at_5
value: 90.70700000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.02256865360266
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 62.43157528757563
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 5.093
- type: map_at_10
value: 12.982
- type: map_at_100
value: 15.031
- type: map_at_1000
value: 15.334
- type: map_at_3
value: 9.339
- type: map_at_5
value: 11.183
- type: mrr_at_1
value: 25.1
- type: mrr_at_10
value: 36.257
- type: mrr_at_100
value: 37.351
- type: mrr_at_1000
value: 37.409
- type: mrr_at_3
value: 33.050000000000004
- type: mrr_at_5
value: 35.205
- type: ndcg_at_1
value: 25.1
- type: ndcg_at_10
value: 21.361
- type: ndcg_at_100
value: 29.396
- type: ndcg_at_1000
value: 34.849999999999994
- type: ndcg_at_3
value: 20.704
- type: ndcg_at_5
value: 18.086
- type: precision_at_1
value: 25.1
- type: precision_at_10
value: 10.94
- type: precision_at_100
value: 2.257
- type: precision_at_1000
value: 0.358
- type: precision_at_3
value: 19.467000000000002
- type: precision_at_5
value: 15.98
- type: recall_at_1
value: 5.093
- type: recall_at_10
value: 22.177
- type: recall_at_100
value: 45.842
- type: recall_at_1000
value: 72.598
- type: recall_at_3
value: 11.833
- type: recall_at_5
value: 16.173000000000002
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 73.56535226754596
- type: cos_sim_spearman
value: 69.32425977603488
- type: euclidean_pearson
value: 71.32425703470898
- type: euclidean_spearman
value: 69.32425217267013
- type: manhattan_pearson
value: 71.25897281394246
- type: manhattan_spearman
value: 69.27132577049578
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 69.66387868726018
- type: cos_sim_spearman
value: 67.85470749045027
- type: euclidean_pearson
value: 66.62075098063795
- type: euclidean_spearman
value: 67.85470749045027
- type: manhattan_pearson
value: 66.61455061901262
- type: manhattan_spearman
value: 67.87229618498695
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.65731331392575
- type: cos_sim_spearman
value: 77.48991626780108
- type: euclidean_pearson
value: 77.19884738623692
- type: euclidean_spearman
value: 77.48985836619045
- type: manhattan_pearson
value: 77.0656684243772
- type: manhattan_spearman
value: 77.30289226582691
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 69.37003253666457
- type: cos_sim_spearman
value: 69.77157648098141
- type: euclidean_pearson
value: 69.39543876030432
- type: euclidean_spearman
value: 69.77157648098141
- type: manhattan_pearson
value: 69.29901600459745
- type: manhattan_spearman
value: 69.65074167527128
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 78.56777256540136
- type: cos_sim_spearman
value: 80.16458787843023
- type: euclidean_pearson
value: 80.16475730686916
- type: euclidean_spearman
value: 80.16458787843023
- type: manhattan_pearson
value: 80.12814463670401
- type: manhattan_spearman
value: 80.1357907984809
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 76.09572350919031
- type: cos_sim_spearman
value: 77.94490233429326
- type: euclidean_pearson
value: 78.36595251203524
- type: euclidean_spearman
value: 77.94490233429326
- type: manhattan_pearson
value: 78.41538768125166
- type: manhattan_spearman
value: 78.01244379569542
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.7843552187951
- type: cos_sim_spearman
value: 82.28085055047386
- type: euclidean_pearson
value: 82.37373672515267
- type: euclidean_spearman
value: 82.28085055047386
- type: manhattan_pearson
value: 82.39387241346917
- type: manhattan_spearman
value: 82.36503339515906
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 68.29963929962095
- type: cos_sim_spearman
value: 67.96868942546051
- type: euclidean_pearson
value: 68.93524903869285
- type: euclidean_spearman
value: 67.96868942546051
- type: manhattan_pearson
value: 68.79144468444811
- type: manhattan_spearman
value: 67.69311483884324
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 72.84789696700685
- type: cos_sim_spearman
value: 75.67875747588545
- type: euclidean_pearson
value: 75.07752300463038
- type: euclidean_spearman
value: 75.67875747588545
- type: manhattan_pearson
value: 74.97934248140928
- type: manhattan_spearman
value: 75.62525644178724
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (en)
type: PhilipMay/stsb_multi_mt
config: en
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 72.84789702519309
- type: cos_sim_spearman
value: 75.67875747588545
- type: euclidean_pearson
value: 75.07752310061133
- type: euclidean_spearman
value: 75.67875747588545
- type: manhattan_pearson
value: 74.97934257159595
- type: manhattan_spearman
value: 75.62525644178724
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.55557720431086
- type: mrr
value: 94.91178665198272
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 59.260999999999996
- type: map_at_10
value: 69.36099999999999
- type: map_at_100
value: 69.868
- type: map_at_1000
value: 69.877
- type: map_at_3
value: 66.617
- type: map_at_5
value: 68.061
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.533
- type: mrr_at_100
value: 70.966
- type: mrr_at_1000
value: 70.975
- type: mrr_at_3
value: 68.667
- type: mrr_at_5
value: 69.717
- type: ndcg_at_1
value: 62.333000000000006
- type: ndcg_at_10
value: 73.82300000000001
- type: ndcg_at_100
value: 76.122
- type: ndcg_at_1000
value: 76.374
- type: ndcg_at_3
value: 69.27499999999999
- type: ndcg_at_5
value: 71.33
- type: precision_at_1
value: 62.333000000000006
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.889000000000003
- type: precision_at_5
value: 17.599999999999998
- type: recall_at_1
value: 59.260999999999996
- type: recall_at_10
value: 86.2
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 98.667
- type: recall_at_3
value: 74.006
- type: recall_at_5
value: 79.167
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81881188118813
- type: cos_sim_ap
value: 95.20169041096409
- type: cos_sim_f1
value: 90.76224129227664
- type: cos_sim_precision
value: 91.64118246687055
- type: cos_sim_recall
value: 89.9
- type: dot_accuracy
value: 99.81881188118813
- type: dot_ap
value: 95.20169041096409
- type: dot_f1
value: 90.76224129227664
- type: dot_precision
value: 91.64118246687055
- type: dot_recall
value: 89.9
- type: euclidean_accuracy
value: 99.81881188118813
- type: euclidean_ap
value: 95.2016904109641
- type: euclidean_f1
value: 90.76224129227664
- type: euclidean_precision
value: 91.64118246687055
- type: euclidean_recall
value: 89.9
- type: manhattan_accuracy
value: 99.81881188118813
- type: manhattan_ap
value: 95.22680188132777
- type: manhattan_f1
value: 90.79013588324108
- type: manhattan_precision
value: 91.38804457953394
- type: manhattan_recall
value: 90.2
- type: max_accuracy
value: 99.81881188118813
- type: max_ap
value: 95.22680188132777
- type: max_f1
value: 90.79013588324108
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.8638628701308
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 37.82028248106046
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.870860210170946
- type: mrr
value: 51.608084521687466
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.60384207444685
- type: cos_sim_spearman
value: 30.84047452209471
- type: dot_pearson
value: 31.60384104417333
- type: dot_spearman
value: 30.84047452209471
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.246
- type: map_at_10
value: 2.051
- type: map_at_100
value: 13.129
- type: map_at_1000
value: 31.56
- type: map_at_3
value: 0.681
- type: map_at_5
value: 1.105
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 87
- type: ndcg_at_10
value: 80.716
- type: ndcg_at_100
value: 63.83
- type: ndcg_at_1000
value: 56.215
- type: ndcg_at_3
value: 84.531
- type: ndcg_at_5
value: 84.777
- type: precision_at_1
value: 94
- type: precision_at_10
value: 84.6
- type: precision_at_100
value: 66.03999999999999
- type: precision_at_1000
value: 24.878
- type: precision_at_3
value: 88.667
- type: precision_at_5
value: 89.60000000000001
- type: recall_at_1
value: 0.246
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 15.895999999999999
- type: recall_at_1000
value: 52.683
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.163
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.852
- type: map_at_10
value: 14.316
- type: map_at_100
value: 20.982
- type: map_at_1000
value: 22.58
- type: map_at_3
value: 7.767
- type: map_at_5
value: 10.321
- type: mrr_at_1
value: 51.019999999999996
- type: mrr_at_10
value: 66.365
- type: mrr_at_100
value: 66.522
- type: mrr_at_1000
value: 66.522
- type: mrr_at_3
value: 62.925
- type: mrr_at_5
value: 64.762
- type: ndcg_at_1
value: 46.939
- type: ndcg_at_10
value: 34.516999999999996
- type: ndcg_at_100
value: 44.25
- type: ndcg_at_1000
value: 54.899
- type: ndcg_at_3
value: 40.203
- type: ndcg_at_5
value: 37.004
- type: precision_at_1
value: 51.019999999999996
- type: precision_at_10
value: 29.796
- type: precision_at_100
value: 8.633000000000001
- type: precision_at_1000
value: 1.584
- type: precision_at_3
value: 40.816
- type: precision_at_5
value: 35.918
- type: recall_at_1
value: 3.852
- type: recall_at_10
value: 20.891000000000002
- type: recall_at_100
value: 52.428
- type: recall_at_1000
value: 84.34899999999999
- type: recall_at_3
value: 8.834
- type: recall_at_5
value: 12.909
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 64.7092
- type: ap
value: 11.972915012305819
- type: f1
value: 49.91050149892115
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.737408036219584
- type: f1
value: 57.07235266246011
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.9147539025798
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.52369315133814
- type: cos_sim_ap
value: 62.34858091376534
- type: cos_sim_f1
value: 58.18225190839694
- type: cos_sim_precision
value: 53.09098824553766
- type: cos_sim_recall
value: 64.35356200527704
- type: dot_accuracy
value: 82.52369315133814
- type: dot_ap
value: 62.34857753814992
- type: dot_f1
value: 58.18225190839694
- type: dot_precision
value: 53.09098824553766
- type: dot_recall
value: 64.35356200527704
- type: euclidean_accuracy
value: 82.52369315133814
- type: euclidean_ap
value: 62.34857756663386
- type: euclidean_f1
value: 58.18225190839694
- type: euclidean_precision
value: 53.09098824553766
- type: euclidean_recall
value: 64.35356200527704
- type: manhattan_accuracy
value: 82.49389044525243
- type: manhattan_ap
value: 62.32245347238179
- type: manhattan_f1
value: 58.206309819213054
- type: manhattan_precision
value: 52.70704044511021
- type: manhattan_recall
value: 64.9868073878628
- type: max_accuracy
value: 82.52369315133814
- type: max_ap
value: 62.34858091376534
- type: max_f1
value: 58.206309819213054
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34555827220863
- type: cos_sim_ap
value: 84.84152481680071
- type: cos_sim_f1
value: 76.860456739428
- type: cos_sim_precision
value: 72.21470150263978
- type: cos_sim_recall
value: 82.14505697566985
- type: dot_accuracy
value: 88.34555827220863
- type: dot_ap
value: 84.84152743322608
- type: dot_f1
value: 76.860456739428
- type: dot_precision
value: 72.21470150263978
- type: dot_recall
value: 82.14505697566985
- type: euclidean_accuracy
value: 88.34555827220863
- type: euclidean_ap
value: 84.84152589453169
- type: euclidean_f1
value: 76.860456739428
- type: euclidean_precision
value: 72.21470150263978
- type: euclidean_recall
value: 82.14505697566985
- type: manhattan_accuracy
value: 88.38242713548337
- type: manhattan_ap
value: 84.8112124970968
- type: manhattan_f1
value: 76.83599206057487
- type: manhattan_precision
value: 73.51244900829934
- type: manhattan_recall
value: 80.47428395441946
- type: max_accuracy
value: 88.38242713548337
- type: max_ap
value: 84.84152743322608
- type: max_f1
value: 76.860456739428
- task:
type: Clustering
dataset:
name: MTEB WikiCitiesClustering
type: jinaai/cities_wiki_clustering
config: default
split: test
revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa
metrics:
- type: v_measure
value: 85.5314389263015
---
<h1 align="center">Snowflake's Arctic-embed-l</h1>
<h4 align="center">
<p>
<a href="#news">News</a> |
<a href="#models">Models</a> |
<a href="#usage">Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#contact">Contact</a> |
<a href="#faq">FAQ</a> |
<a href="#license">License</a> |
<a href="#acknowledgement">Acknowledgement</a>
</p>
</h4>
## News
12/04/2024: Release of [snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) and [snowflake-arctic-embed-m-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v2.0), our newest models built with multilingual workloads in mind. These models outperform prior versions of Arctic Embed, and we suggest they replace those prior versions!
07/26/2024: Release preprint [[2407.18887] Embedding And Clustering Your Data Can Improve Contrastive Pretraining](https://arxiv.org/abs/2407.18887) on arXiv.
07/18/2024: Release of `snowflake-arctic-embed-m-v1.5`, capable of producing highly compressible embedding vectors that preserve quality even when squished as small as 128 bytes per vector. Details about the development of this model are available in the [launch post on the Snowflake engineering blog](https://www.snowflake.com/engineering-blog/arctic-embed-m-v1-5-enterprise-retrieval/).
05/10/2024: Release the [technical report on Arctic Embed](https://arxiv.org/abs/2405.05374)
04/16/2024: Release the **snowflake-arctic-embed** family of text embedding models. The releases are state-of-the-art for Retrieval quality at each of their representative size profiles. [Technical Report]() is coming shortly. For more details, please refer to our Github: [Arctic-Text-Embed](https://github.com/Snowflake-Labs/arctic-embed).
## Models
snowflake-arctic-embed is a suite of text embedding models that focuses on creating high-quality retrieval models optimized for performance.
The `snowflake-arctic-embed` models achieve **state-of-the-art performance on the MTEB/BEIR leaderboard** for each of their size variants. Evaluation is performed using these [scripts](https://github.com/Snowflake-Labs/snowflake-arctic-embed/tree/main/src). As shown below, each class of model size achieves SOTA retrieval accuracy compared to other top models.
The models are trained by leveraging existing open-source text representation models, such as bert-base-uncased, and are trained in a multi-stage pipeline to optimize their retrieval performance. First, the models are trained with large batches of query-document pairs where negatives are derived in-batch. Pretraining leverages about 400m samples of a mix of public datasets and proprietary web search data. Following pretraining, the models are further optimized with a long training run on a smaller dataset (about 1m samples) of triplets of query, positive document, and negative document derived from hard negative mining. Mining of the negatives and data curation are crucial to retrieval accuracy. A detailed technical report can be found [here](https://arxiv.org/abs/2405.05374).
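To make the in-batch negative stage concrete, below is a minimal sketch of the contrastive objective, assuming a PyTorch-style setup; the temperature and tensor shapes are illustrative and are not taken from the Arctic Embed training code.
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              doc_emb: torch.Tensor,
                              temperature: float = 0.02) -> torch.Tensor:
    """InfoNCE-style loss: for each query, its paired document is the positive
    and every other document in the batch acts as a negative."""
    query_emb = F.normalize(query_emb, p=2, dim=1)
    doc_emb = F.normalize(doc_emb, p=2, dim=1)
    # (batch, batch) similarity matrix; the diagonal holds the positive pairs.
    logits = query_emb @ doc_emb.T / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Random embeddings stand in for encoder outputs here.
queries = torch.randn(8, 1024)
documents = torch.randn(8, 1024)
print(in_batch_contrastive_loss(queries, documents).item())
```
The second, triplet-based stage described above would replace the in-batch negatives with explicitly mined hard negatives for each query.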
| Name | MTEB Retrieval Score (NDCG @ 10) | Parameters (Millions) | Embedding Dimension |
| ----------------------------------------------------------------------- | -------------------------------- | --------------------- | ------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 | 22 | 384 |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 | 33 | 384 |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 | 110 | 768 |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 | 137 | 768 |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 | 335 | 1024 |
Aside from being great open-source models, the largest model, [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/), can serve as a natural replacement for closed-source embedding APIs, as shown below.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| Google-gecko-text-embedding | 55.7 |
| text-embedding-3-large | 55.44 |
| Cohere-embed-english-v3.0 | 55.00 |
| bge-large-en-v1.5 | 54.29 |
### [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs)
This tiny model packs quite the punch. Based on the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model with only 22m parameters and 384 dimensions, this model should meet even the strictest latency/TCO budgets. Despite its size, its retrieval accuracy is closer to that of models with 100m parameters.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------- | -------------------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 |
| GIST-all-MiniLM-L6-v2 | 45.12 |
| gte-tiny | 44.92 |
| all-MiniLM-L6-v2 | 41.95 |
| bge-micro-v2 | 42.56 |
### [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s)
Based on the [intfloat/e5-small-unsupervised](https://huggingface.co/intfloat/e5-small-unsupervised) model, this small model does not trade off retrieval accuracy for its small size. With only 33m parameters and 384 dimensions, this model should easily allow scaling to large datasets.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 |
| bge-small-en-v1.5 | 51.68 |
| Cohere-embed-english-light-v3.0 | 51.34 |
| text-embedding-3-small | 51.08 |
| e5-small-v2 | 49.04 |
### [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/)
Based on the [intfloat/e5-base-unsupervised](https://huggingface.co/intfloat/e5-base-unsupervised) model, this medium model is the workhorse that provides the best retrieval performance without slowing down inference.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 |
| bge-base-en-v1.5 | 53.25 |
| nomic-embed-text-v1.5 | 53.25 |
| GIST-Embedding-v0 | 52.31 |
| gte-base | 52.31 |
### [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/)
Based on the [nomic-ai/nomic-embed-text-v1-unsupervised](https://huggingface.co/nomic-ai/nomic-embed-text-v1-unsupervised) model, this long-context variant of our medium-sized model is perfect for workloads that can be constrained by the regular 512 token context of our other models. Without the use of RPE, this model supports up to 2048 tokens. With RPE, it can scale to 8192!
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 |
| nomic-embed-text-v1.5 | 53.01 |
| nomic-embed-text-v1 | 52.81 |
### [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/)
Based on the [intfloat/e5-large-unsupervised](https://huggingface.co/intfloat/e5-large-unsupervised) model, this large model is a direct drop-in for closed APIs and delivers the most accurate retrieval experience.
| Model Name | MTEB Retrieval Score (NDCG @ 10) |
| ------------------------------------------------------------------ | -------------------------------- |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 |
| UAE-Large-V1 | 54.66 |
| bge-large-en-v1.5 | 54.29 |
| mxbai-embed-large-v1 | 54.39 |
| e5-Large-v2 | 50.56 |
## Usage
### Using Sentence Transformers
You can use the sentence-transformers package to run a snowflake-arctic-embed model, as shown below.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
# Output passages & scores
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
```
Query: what is snowflake?
0.28976774 The Data Cloud!
0.19071159 Mexico City of Course!
Query: Where can I get the best tacos?
0.38650584 Mexico City of Course!
0.25145516 The Data Cloud!
```
### Using Huggingface transformers
You can use the transformers package to run a snowflake-arctic-embed model, as shown below. For optimal retrieval quality, use the CLS token to embed each text portion and apply the query prefix below (only to the query).
```python
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-l')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-l', add_pooling_layer=False)
model.eval()
query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)
# Compute token embeddings
with torch.no_grad():
query_embeddings = model(**query_tokens)[0][:, 0]
document_embeddings = model(**document_tokens)[0][:, 0]
# normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)
scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
doc_score_pairs = list(zip(documents, query_scores))
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
#Output passages & scores
print("Query:", query)
for document, score in doc_score_pairs:
print(score, document)
```
### Using Transformers.js
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) by running:
```bash
npm i @xenova/transformers
```
You can then use the model to compute embeddings as follows:
```js
import { pipeline, dot } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-l', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const sentences = [
'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
'The Data Cloud!',
'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.25145517380846977, 0.3865060421197194]
```
### Using Infinity
OpenAI compatible API deployment with [Infinity](https://github.com/michaelfeil/infinity) and Docker.
```bash
docker run --gpus all -v $PWD/data:/app/.cache -p "7997":"7997" \
michaelf34/infinity:0.0.70 \
v2 --model-id Snowflake/snowflake-arctic-embed-l --dtype float16 --batch-size 32 --engine torch --port 7997
```
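Once the container is running, Infinity exposes an OpenAI-compatible embeddings endpoint. The snippet below is a hedged example of calling it with `requests`; the route and payload follow the OpenAI embeddings API shape, but the exact path can vary by Infinity version, so check the server's interactive docs if the request fails.
```python
import requests

# Assumes the Infinity container from the docker command above is listening on port 7997.
response = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "Snowflake/snowflake-arctic-embed-l",
        "input": [
            "Represent this sentence for searching relevant passages: what is snowflake?",
            "The Data Cloud!",
        ],
    },
)
response.raise_for_status()
embeddings = [item["embedding"] for item in response.json()["data"]]
print(len(embeddings), len(embeddings[0]))
```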
## FAQ
TBD
## Contact
Feel free to open an issue or pull request if you have any questions or suggestions about this project.
You can also email Daniel Campos ([email protected]).
## License
Arctic is licensed under the [Apache-2](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge.
## Acknowledgement
We want to thank the open-source community, which has provided the great building blocks upon which we could make our models.
We thank our modeling engineers, Danmei Xu, Luke Merrick, Gaurav Nuti, and Daniel Campos, for making these great models possible.
We thank our leadership, Himabindu Pucha, Kelvin So, Vivek Raghunathan, and Sridhar Ramaswamy, for supporting this work.
We also thank the open-source community for producing the great models we could build on top of and making these releases possible.
Finally, we thank the researchers who created BEIR and MTEB benchmarks.
It is largely thanks to their tireless work to define what better looks like that we could improve model performance.
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=d6741f66-9018-401c-8805-d79c74fb98ff" /> | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-1.4b-4bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,713 | 1,713 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-1.4b - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-1.4b/
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1.4B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1.4B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1.4B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1.4B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1.4B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not
produce the most "accurate" text. Never rely on Pythia-1.4B to produce
factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1.4B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1.4B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
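Because the intermediate checkpoints are published as git branches of each repository, you can also enumerate them programmatically. The sketch below uses the `huggingface_hub` client (assumed to be installed); branch names such as `step3000` follow the convention described above.
```python
from huggingface_hub import list_repo_refs

# List all "step*" branches of the Pythia-1.4B repository.
refs = list_repo_refs("EleutherAI/pythia-1.4b")
step_branches = sorted(
    (ref.name for ref in refs.branches if ref.name.startswith("step")),
    key=lambda name: int(name.removeprefix("step")),
)
print(len(step_branches))   # expected: 154 checkpoints
print(step_branches[:5])    # e.g. ['step0', 'step1', 'step2', 'step4', 'step8']
```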
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1.4B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
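As a quick sanity check of the totals above (a back-of-the-envelope calculation, not part of the original training logs), the step count and batch size multiply out to the stated token budget:
```python
# 143,000 optimizer steps at a fixed batch size of 2,097,152 tokens per step.
steps = 143_000
tokens_per_step = 2_097_152
print(steps * tokens_per_step)            # 299892736000 tokens, as stated above
# Checkpoints are saved every 2,097,152,000 tokens, i.e. every 1,000 steps.
print(2_097_152_000 // tokens_per_step)   # 1000
```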
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] | Non_BioNLP |
michaelfeil/ct2fast-e5-large-v2 | michaelfeil | sentence-similarity | [
"sentence-transformers",
"bert",
"ctranslate2",
"int8",
"float16",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"en",
"arxiv:2212.03533",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,686 | 1,697 | 13 | 3 | ---
language:
- en
license: mit
tags:
- ctranslate2
- int8
- float16
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: e5-large-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.22388059701493
- type: ap
value: 43.20816505595132
- type: f1
value: 73.27811303522058
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.748325
- type: ap
value: 90.72534979701297
- type: f1
value: 93.73895874282185
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.612
- type: f1
value: 47.61157345898393
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.541999999999998
- type: map_at_10
value: 38.208
- type: map_at_100
value: 39.417
- type: map_at_1000
value: 39.428999999999995
- type: map_at_3
value: 33.95
- type: map_at_5
value: 36.329
- type: mrr_at_1
value: 23.755000000000003
- type: mrr_at_10
value: 38.288
- type: mrr_at_100
value: 39.511
- type: mrr_at_1000
value: 39.523
- type: mrr_at_3
value: 34.009
- type: mrr_at_5
value: 36.434
- type: ndcg_at_1
value: 23.541999999999998
- type: ndcg_at_10
value: 46.417
- type: ndcg_at_100
value: 51.812000000000005
- type: ndcg_at_1000
value: 52.137
- type: ndcg_at_3
value: 37.528
- type: ndcg_at_5
value: 41.81
- type: precision_at_1
value: 23.541999999999998
- type: precision_at_10
value: 7.269
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 15.979
- type: precision_at_5
value: 11.664
- type: recall_at_1
value: 23.541999999999998
- type: recall_at_10
value: 72.688
- type: recall_at_100
value: 96.871
- type: recall_at_1000
value: 99.431
- type: recall_at_3
value: 47.937000000000005
- type: recall_at_5
value: 58.321
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.546499570522094
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.01607489943561
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 59.616107510107774
- type: mrr
value: 72.75106626214661
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.33018094733868
- type: cos_sim_spearman
value: 83.60190492611737
- type: euclidean_pearson
value: 82.1492450218961
- type: euclidean_spearman
value: 82.70308926526991
- type: manhattan_pearson
value: 81.93959600076842
- type: manhattan_spearman
value: 82.73260801016369
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.54545454545455
- type: f1
value: 84.49582530928923
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.362725540120096
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 34.849509608178145
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.502999999999997
- type: map_at_10
value: 43.323
- type: map_at_100
value: 44.708999999999996
- type: map_at_1000
value: 44.838
- type: map_at_3
value: 38.987
- type: map_at_5
value: 41.516999999999996
- type: mrr_at_1
value: 38.769999999999996
- type: mrr_at_10
value: 49.13
- type: mrr_at_100
value: 49.697
- type: mrr_at_1000
value: 49.741
- type: mrr_at_3
value: 45.804
- type: mrr_at_5
value: 47.842
- type: ndcg_at_1
value: 38.769999999999996
- type: ndcg_at_10
value: 50.266999999999996
- type: ndcg_at_100
value: 54.967
- type: ndcg_at_1000
value: 56.976000000000006
- type: ndcg_at_3
value: 43.823
- type: ndcg_at_5
value: 47.12
- type: precision_at_1
value: 38.769999999999996
- type: precision_at_10
value: 10.057
- type: precision_at_100
value: 1.554
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.125
- type: precision_at_5
value: 15.851
- type: recall_at_1
value: 31.502999999999997
- type: recall_at_10
value: 63.715999999999994
- type: recall_at_100
value: 83.61800000000001
- type: recall_at_1000
value: 96.63199999999999
- type: recall_at_3
value: 45.403
- type: recall_at_5
value: 54.481
- type: map_at_1
value: 27.833000000000002
- type: map_at_10
value: 37.330999999999996
- type: map_at_100
value: 38.580999999999996
- type: map_at_1000
value: 38.708
- type: map_at_3
value: 34.713
- type: map_at_5
value: 36.104
- type: mrr_at_1
value: 35.223
- type: mrr_at_10
value: 43.419000000000004
- type: mrr_at_100
value: 44.198
- type: mrr_at_1000
value: 44.249
- type: mrr_at_3
value: 41.614000000000004
- type: mrr_at_5
value: 42.553000000000004
- type: ndcg_at_1
value: 35.223
- type: ndcg_at_10
value: 42.687999999999995
- type: ndcg_at_100
value: 47.447
- type: ndcg_at_1000
value: 49.701
- type: ndcg_at_3
value: 39.162
- type: ndcg_at_5
value: 40.557
- type: precision_at_1
value: 35.223
- type: precision_at_10
value: 7.962
- type: precision_at_100
value: 1.304
- type: precision_at_1000
value: 0.18
- type: precision_at_3
value: 19.023
- type: precision_at_5
value: 13.184999999999999
- type: recall_at_1
value: 27.833000000000002
- type: recall_at_10
value: 51.881
- type: recall_at_100
value: 72.04
- type: recall_at_1000
value: 86.644
- type: recall_at_3
value: 40.778
- type: recall_at_5
value: 45.176
- type: map_at_1
value: 38.175
- type: map_at_10
value: 51.174
- type: map_at_100
value: 52.26499999999999
- type: map_at_1000
value: 52.315999999999995
- type: map_at_3
value: 47.897
- type: map_at_5
value: 49.703
- type: mrr_at_1
value: 43.448
- type: mrr_at_10
value: 54.505
- type: mrr_at_100
value: 55.216
- type: mrr_at_1000
value: 55.242000000000004
- type: mrr_at_3
value: 51.98500000000001
- type: mrr_at_5
value: 53.434000000000005
- type: ndcg_at_1
value: 43.448
- type: ndcg_at_10
value: 57.282
- type: ndcg_at_100
value: 61.537
- type: ndcg_at_1000
value: 62.546
- type: ndcg_at_3
value: 51.73799999999999
- type: ndcg_at_5
value: 54.324
- type: precision_at_1
value: 43.448
- type: precision_at_10
value: 9.292
- type: precision_at_100
value: 1.233
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.218
- type: precision_at_5
value: 15.887
- type: recall_at_1
value: 38.175
- type: recall_at_10
value: 72.00999999999999
- type: recall_at_100
value: 90.155
- type: recall_at_1000
value: 97.257
- type: recall_at_3
value: 57.133
- type: recall_at_5
value: 63.424
- type: map_at_1
value: 22.405
- type: map_at_10
value: 30.043
- type: map_at_100
value: 31.191000000000003
- type: map_at_1000
value: 31.275
- type: map_at_3
value: 27.034000000000002
- type: map_at_5
value: 28.688000000000002
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.993
- type: mrr_at_100
value: 32.992
- type: mrr_at_1000
value: 33.050000000000004
- type: mrr_at_3
value: 28.964000000000002
- type: mrr_at_5
value: 30.653000000000002
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 40.709
- type: ndcg_at_1000
value: 42.855
- type: ndcg_at_3
value: 29.139
- type: ndcg_at_5
value: 32.045
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.65
- type: precision_at_100
value: 0.885
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 12.279
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 22.405
- type: recall_at_10
value: 49.391
- type: recall_at_100
value: 74.53699999999999
- type: recall_at_1000
value: 90.605
- type: recall_at_3
value: 33.126
- type: recall_at_5
value: 40.073
- type: map_at_1
value: 13.309999999999999
- type: map_at_10
value: 20.688000000000002
- type: map_at_100
value: 22.022
- type: map_at_1000
value: 22.152
- type: map_at_3
value: 17.954
- type: map_at_5
value: 19.439
- type: mrr_at_1
value: 16.294
- type: mrr_at_10
value: 24.479
- type: mrr_at_100
value: 25.515
- type: mrr_at_1000
value: 25.593
- type: mrr_at_3
value: 21.642
- type: mrr_at_5
value: 23.189999999999998
- type: ndcg_at_1
value: 16.294
- type: ndcg_at_10
value: 25.833000000000002
- type: ndcg_at_100
value: 32.074999999999996
- type: ndcg_at_1000
value: 35.083
- type: ndcg_at_3
value: 20.493
- type: ndcg_at_5
value: 22.949
- type: precision_at_1
value: 16.294
- type: precision_at_10
value: 5.112
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 9.908999999999999
- type: precision_at_5
value: 7.587000000000001
- type: recall_at_1
value: 13.309999999999999
- type: recall_at_10
value: 37.851
- type: recall_at_100
value: 64.835
- type: recall_at_1000
value: 86.334
- type: recall_at_3
value: 23.493
- type: recall_at_5
value: 29.528
- type: map_at_1
value: 25.857999999999997
- type: map_at_10
value: 35.503
- type: map_at_100
value: 36.957
- type: map_at_1000
value: 37.065
- type: map_at_3
value: 32.275999999999996
- type: map_at_5
value: 34.119
- type: mrr_at_1
value: 31.954
- type: mrr_at_10
value: 40.851
- type: mrr_at_100
value: 41.863
- type: mrr_at_1000
value: 41.900999999999996
- type: mrr_at_3
value: 38.129999999999995
- type: mrr_at_5
value: 39.737
- type: ndcg_at_1
value: 31.954
- type: ndcg_at_10
value: 41.343999999999994
- type: ndcg_at_100
value: 47.397
- type: ndcg_at_1000
value: 49.501
- type: ndcg_at_3
value: 36.047000000000004
- type: ndcg_at_5
value: 38.639
- type: precision_at_1
value: 31.954
- type: precision_at_10
value: 7.68
- type: precision_at_100
value: 1.247
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_3
value: 17.132
- type: precision_at_5
value: 12.589
- type: recall_at_1
value: 25.857999999999997
- type: recall_at_10
value: 53.43599999999999
- type: recall_at_100
value: 78.82400000000001
- type: recall_at_1000
value: 92.78999999999999
- type: recall_at_3
value: 38.655
- type: recall_at_5
value: 45.216
- type: map_at_1
value: 24.709
- type: map_at_10
value: 34.318
- type: map_at_100
value: 35.657
- type: map_at_1000
value: 35.783
- type: map_at_3
value: 31.326999999999998
- type: map_at_5
value: 33.021
- type: mrr_at_1
value: 30.137000000000004
- type: mrr_at_10
value: 39.093
- type: mrr_at_100
value: 39.992
- type: mrr_at_1000
value: 40.056999999999995
- type: mrr_at_3
value: 36.606
- type: mrr_at_5
value: 37.861
- type: ndcg_at_1
value: 30.137000000000004
- type: ndcg_at_10
value: 39.974
- type: ndcg_at_100
value: 45.647999999999996
- type: ndcg_at_1000
value: 48.259
- type: ndcg_at_3
value: 35.028
- type: ndcg_at_5
value: 37.175999999999995
- type: precision_at_1
value: 30.137000000000004
- type: precision_at_10
value: 7.363
- type: precision_at_100
value: 1.184
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 16.857
- type: precision_at_5
value: 11.963
- type: recall_at_1
value: 24.709
- type: recall_at_10
value: 52.087
- type: recall_at_100
value: 76.125
- type: recall_at_1000
value: 93.82300000000001
- type: recall_at_3
value: 38.149
- type: recall_at_5
value: 43.984
- type: map_at_1
value: 23.40791666666667
- type: map_at_10
value: 32.458083333333335
- type: map_at_100
value: 33.691916666666664
- type: map_at_1000
value: 33.81191666666666
- type: map_at_3
value: 29.51625
- type: map_at_5
value: 31.168083333333335
- type: mrr_at_1
value: 27.96591666666666
- type: mrr_at_10
value: 36.528583333333344
- type: mrr_at_100
value: 37.404
- type: mrr_at_1000
value: 37.464333333333336
- type: mrr_at_3
value: 33.92883333333333
- type: mrr_at_5
value: 35.41933333333333
- type: ndcg_at_1
value: 27.96591666666666
- type: ndcg_at_10
value: 37.89141666666666
- type: ndcg_at_100
value: 43.23066666666666
- type: ndcg_at_1000
value: 45.63258333333333
- type: ndcg_at_3
value: 32.811249999999994
- type: ndcg_at_5
value: 35.22566666666667
- type: precision_at_1
value: 27.96591666666666
- type: precision_at_10
value: 6.834083333333332
- type: precision_at_100
value: 1.12225
- type: precision_at_1000
value: 0.15241666666666667
- type: precision_at_3
value: 15.264333333333335
- type: precision_at_5
value: 11.039416666666666
- type: recall_at_1
value: 23.40791666666667
- type: recall_at_10
value: 49.927083333333336
- type: recall_at_100
value: 73.44641666666668
- type: recall_at_1000
value: 90.19950000000001
- type: recall_at_3
value: 35.88341666666667
- type: recall_at_5
value: 42.061249999999994
- type: map_at_1
value: 19.592000000000002
- type: map_at_10
value: 26.895999999999997
- type: map_at_100
value: 27.921000000000003
- type: map_at_1000
value: 28.02
- type: map_at_3
value: 24.883
- type: map_at_5
value: 25.812
- type: mrr_at_1
value: 22.698999999999998
- type: mrr_at_10
value: 29.520999999999997
- type: mrr_at_100
value: 30.458000000000002
- type: mrr_at_1000
value: 30.526999999999997
- type: mrr_at_3
value: 27.633000000000003
- type: mrr_at_5
value: 28.483999999999998
- type: ndcg_at_1
value: 22.698999999999998
- type: ndcg_at_10
value: 31.061
- type: ndcg_at_100
value: 36.398
- type: ndcg_at_1000
value: 38.89
- type: ndcg_at_3
value: 27.149
- type: ndcg_at_5
value: 28.627000000000002
- type: precision_at_1
value: 22.698999999999998
- type: precision_at_10
value: 5.106999999999999
- type: precision_at_100
value: 0.857
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 11.963
- type: precision_at_5
value: 8.221
- type: recall_at_1
value: 19.592000000000002
- type: recall_at_10
value: 41.329
- type: recall_at_100
value: 66.094
- type: recall_at_1000
value: 84.511
- type: recall_at_3
value: 30.61
- type: recall_at_5
value: 34.213
- type: map_at_1
value: 14.71
- type: map_at_10
value: 20.965
- type: map_at_100
value: 21.994
- type: map_at_1000
value: 22.133
- type: map_at_3
value: 18.741
- type: map_at_5
value: 19.951
- type: mrr_at_1
value: 18.307000000000002
- type: mrr_at_10
value: 24.66
- type: mrr_at_100
value: 25.540000000000003
- type: mrr_at_1000
value: 25.629
- type: mrr_at_3
value: 22.511
- type: mrr_at_5
value: 23.72
- type: ndcg_at_1
value: 18.307000000000002
- type: ndcg_at_10
value: 25.153
- type: ndcg_at_100
value: 30.229
- type: ndcg_at_1000
value: 33.623
- type: ndcg_at_3
value: 21.203
- type: ndcg_at_5
value: 23.006999999999998
- type: precision_at_1
value: 18.307000000000002
- type: precision_at_10
value: 4.725
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.14
- type: precision_at_5
value: 7.481
- type: recall_at_1
value: 14.71
- type: recall_at_10
value: 34.087
- type: recall_at_100
value: 57.147999999999996
- type: recall_at_1000
value: 81.777
- type: recall_at_3
value: 22.996
- type: recall_at_5
value: 27.73
- type: map_at_1
value: 23.472
- type: map_at_10
value: 32.699
- type: map_at_100
value: 33.867000000000004
- type: map_at_1000
value: 33.967000000000006
- type: map_at_3
value: 29.718
- type: map_at_5
value: 31.345
- type: mrr_at_1
value: 28.265
- type: mrr_at_10
value: 36.945
- type: mrr_at_100
value: 37.794
- type: mrr_at_1000
value: 37.857
- type: mrr_at_3
value: 34.266000000000005
- type: mrr_at_5
value: 35.768
- type: ndcg_at_1
value: 28.265
- type: ndcg_at_10
value: 38.35
- type: ndcg_at_100
value: 43.739
- type: ndcg_at_1000
value: 46.087
- type: ndcg_at_3
value: 33.004
- type: ndcg_at_5
value: 35.411
- type: precision_at_1
value: 28.265
- type: precision_at_10
value: 6.715999999999999
- type: precision_at_100
value: 1.059
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 15.299
- type: precision_at_5
value: 10.951
- type: recall_at_1
value: 23.472
- type: recall_at_10
value: 51.413
- type: recall_at_100
value: 75.17
- type: recall_at_1000
value: 91.577
- type: recall_at_3
value: 36.651
- type: recall_at_5
value: 42.814
- type: map_at_1
value: 23.666
- type: map_at_10
value: 32.963
- type: map_at_100
value: 34.544999999999995
- type: map_at_1000
value: 34.792
- type: map_at_3
value: 29.74
- type: map_at_5
value: 31.5
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 38.013000000000005
- type: mrr_at_100
value: 38.997
- type: mrr_at_1000
value: 39.055
- type: mrr_at_3
value: 34.947
- type: mrr_at_5
value: 36.815
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.361000000000004
- type: ndcg_at_100
value: 45.186
- type: ndcg_at_1000
value: 47.867
- type: ndcg_at_3
value: 33.797
- type: ndcg_at_5
value: 36.456
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.532
- type: precision_at_1000
value: 0.247
- type: precision_at_3
value: 15.876000000000001
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 23.666
- type: recall_at_10
value: 51.858000000000004
- type: recall_at_100
value: 77.805
- type: recall_at_1000
value: 94.504
- type: recall_at_3
value: 36.207
- type: recall_at_5
value: 43.094
- type: map_at_1
value: 15.662
- type: map_at_10
value: 23.594
- type: map_at_100
value: 24.593999999999998
- type: map_at_1000
value: 24.694
- type: map_at_3
value: 20.925
- type: map_at_5
value: 22.817999999999998
- type: mrr_at_1
value: 17.375
- type: mrr_at_10
value: 25.734
- type: mrr_at_100
value: 26.586
- type: mrr_at_1000
value: 26.671
- type: mrr_at_3
value: 23.044
- type: mrr_at_5
value: 24.975
- type: ndcg_at_1
value: 17.375
- type: ndcg_at_10
value: 28.186
- type: ndcg_at_100
value: 33.436
- type: ndcg_at_1000
value: 36.203
- type: ndcg_at_3
value: 23.152
- type: ndcg_at_5
value: 26.397
- type: precision_at_1
value: 17.375
- type: precision_at_10
value: 4.677
- type: precision_at_100
value: 0.786
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 10.351
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 15.662
- type: recall_at_10
value: 40.066
- type: recall_at_100
value: 65.006
- type: recall_at_1000
value: 85.94000000000001
- type: recall_at_3
value: 27.400000000000002
- type: recall_at_5
value: 35.002
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.853
- type: map_at_10
value: 15.568000000000001
- type: map_at_100
value: 17.383000000000003
- type: map_at_1000
value: 17.584
- type: map_at_3
value: 12.561
- type: map_at_5
value: 14.056
- type: mrr_at_1
value: 18.958
- type: mrr_at_10
value: 28.288000000000004
- type: mrr_at_100
value: 29.432000000000002
- type: mrr_at_1000
value: 29.498
- type: mrr_at_3
value: 25.049
- type: mrr_at_5
value: 26.857
- type: ndcg_at_1
value: 18.958
- type: ndcg_at_10
value: 22.21
- type: ndcg_at_100
value: 29.596
- type: ndcg_at_1000
value: 33.583
- type: ndcg_at_3
value: 16.994999999999997
- type: ndcg_at_5
value: 18.95
- type: precision_at_1
value: 18.958
- type: precision_at_10
value: 7.192
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.22399999999999998
- type: precision_at_3
value: 12.573
- type: precision_at_5
value: 10.202
- type: recall_at_1
value: 8.853
- type: recall_at_10
value: 28.087
- type: recall_at_100
value: 53.701
- type: recall_at_1000
value: 76.29899999999999
- type: recall_at_3
value: 15.913
- type: recall_at_5
value: 20.658
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.077
- type: map_at_10
value: 20.788999999999998
- type: map_at_100
value: 30.429000000000002
- type: map_at_1000
value: 32.143
- type: map_at_3
value: 14.692
- type: map_at_5
value: 17.139
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.036
- type: mrr_at_100
value: 78.401
- type: mrr_at_1000
value: 78.404
- type: mrr_at_3
value: 76.75
- type: mrr_at_5
value: 77.47500000000001
- type: ndcg_at_1
value: 58.12500000000001
- type: ndcg_at_10
value: 44.015
- type: ndcg_at_100
value: 49.247
- type: ndcg_at_1000
value: 56.211999999999996
- type: ndcg_at_3
value: 49.151
- type: ndcg_at_5
value: 46.195
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 35.5
- type: precision_at_100
value: 11.355
- type: precision_at_1000
value: 2.1950000000000003
- type: precision_at_3
value: 53.083000000000006
- type: precision_at_5
value: 44.800000000000004
- type: recall_at_1
value: 9.077
- type: recall_at_10
value: 26.259
- type: recall_at_100
value: 56.547000000000004
- type: recall_at_1000
value: 78.551
- type: recall_at_3
value: 16.162000000000003
- type: recall_at_5
value: 19.753999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.44500000000001
- type: f1
value: 44.67067691783401
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.182
- type: map_at_10
value: 78.223
- type: map_at_100
value: 78.498
- type: map_at_1000
value: 78.512
- type: map_at_3
value: 76.71
- type: map_at_5
value: 77.725
- type: mrr_at_1
value: 73.177
- type: mrr_at_10
value: 82.513
- type: mrr_at_100
value: 82.633
- type: mrr_at_1000
value: 82.635
- type: mrr_at_3
value: 81.376
- type: mrr_at_5
value: 82.182
- type: ndcg_at_1
value: 73.177
- type: ndcg_at_10
value: 82.829
- type: ndcg_at_100
value: 83.84
- type: ndcg_at_1000
value: 84.07900000000001
- type: ndcg_at_3
value: 80.303
- type: ndcg_at_5
value: 81.846
- type: precision_at_1
value: 73.177
- type: precision_at_10
value: 10.241999999999999
- type: precision_at_100
value: 1.099
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 31.247999999999998
- type: precision_at_5
value: 19.697
- type: recall_at_1
value: 68.182
- type: recall_at_10
value: 92.657
- type: recall_at_100
value: 96.709
- type: recall_at_1000
value: 98.184
- type: recall_at_3
value: 85.9
- type: recall_at_5
value: 89.755
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.108
- type: map_at_10
value: 33.342
- type: map_at_100
value: 35.281
- type: map_at_1000
value: 35.478
- type: map_at_3
value: 29.067
- type: map_at_5
value: 31.563000000000002
- type: mrr_at_1
value: 41.667
- type: mrr_at_10
value: 49.913000000000004
- type: mrr_at_100
value: 50.724000000000004
- type: mrr_at_1000
value: 50.766
- type: mrr_at_3
value: 47.504999999999995
- type: mrr_at_5
value: 49.033
- type: ndcg_at_1
value: 41.667
- type: ndcg_at_10
value: 41.144
- type: ndcg_at_100
value: 48.326
- type: ndcg_at_1000
value: 51.486
- type: ndcg_at_3
value: 37.486999999999995
- type: ndcg_at_5
value: 38.78
- type: precision_at_1
value: 41.667
- type: precision_at_10
value: 11.358
- type: precision_at_100
value: 1.873
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 25
- type: precision_at_5
value: 18.519
- type: recall_at_1
value: 21.108
- type: recall_at_10
value: 47.249
- type: recall_at_100
value: 74.52
- type: recall_at_1000
value: 93.31
- type: recall_at_3
value: 33.271
- type: recall_at_5
value: 39.723000000000006
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.317
- type: map_at_10
value: 64.861
- type: map_at_100
value: 65.697
- type: map_at_1000
value: 65.755
- type: map_at_3
value: 61.258
- type: map_at_5
value: 63.590999999999994
- type: mrr_at_1
value: 80.635
- type: mrr_at_10
value: 86.528
- type: mrr_at_100
value: 86.66199999999999
- type: mrr_at_1000
value: 86.666
- type: mrr_at_3
value: 85.744
- type: mrr_at_5
value: 86.24300000000001
- type: ndcg_at_1
value: 80.635
- type: ndcg_at_10
value: 73.13199999999999
- type: ndcg_at_100
value: 75.927
- type: ndcg_at_1000
value: 76.976
- type: ndcg_at_3
value: 68.241
- type: ndcg_at_5
value: 71.071
- type: precision_at_1
value: 80.635
- type: precision_at_10
value: 15.326
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 43.961
- type: precision_at_5
value: 28.599999999999998
- type: recall_at_1
value: 40.317
- type: recall_at_10
value: 76.631
- type: recall_at_100
value: 87.495
- type: recall_at_1000
value: 94.362
- type: recall_at_3
value: 65.94200000000001
- type: recall_at_5
value: 71.499
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 91.686
- type: ap
value: 87.5577120393173
- type: f1
value: 91.6629447355139
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.702
- type: map_at_10
value: 36.414
- type: map_at_100
value: 37.561
- type: map_at_1000
value: 37.605
- type: map_at_3
value: 32.456
- type: map_at_5
value: 34.827000000000005
- type: mrr_at_1
value: 24.355
- type: mrr_at_10
value: 37.01
- type: mrr_at_100
value: 38.085
- type: mrr_at_1000
value: 38.123000000000005
- type: mrr_at_3
value: 33.117999999999995
- type: mrr_at_5
value: 35.452
- type: ndcg_at_1
value: 24.384
- type: ndcg_at_10
value: 43.456
- type: ndcg_at_100
value: 48.892
- type: ndcg_at_1000
value: 49.964
- type: ndcg_at_3
value: 35.475
- type: ndcg_at_5
value: 39.711
- type: precision_at_1
value: 24.384
- type: precision_at_10
value: 6.7940000000000005
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.052999999999999
- type: precision_at_5
value: 11.189
- type: recall_at_1
value: 23.702
- type: recall_at_10
value: 65.057
- type: recall_at_100
value: 90.021
- type: recall_at_1000
value: 98.142
- type: recall_at_3
value: 43.551
- type: recall_at_5
value: 53.738
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.62380300957591
- type: f1
value: 94.49871222100734
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.14090287277702
- type: f1
value: 60.32101258220515
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.84330867518494
- type: f1
value: 71.92248688515255
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.10692669804976
- type: f1
value: 77.9904839122866
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.822988923078444
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.38394880253403
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.82504612539082
- type: mrr
value: 32.84462298174977
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.029
- type: map_at_10
value: 14.088999999999999
- type: map_at_100
value: 17.601
- type: map_at_1000
value: 19.144
- type: map_at_3
value: 10.156
- type: map_at_5
value: 11.892
- type: mrr_at_1
value: 46.44
- type: mrr_at_10
value: 56.596999999999994
- type: mrr_at_100
value: 57.11000000000001
- type: mrr_at_1000
value: 57.14
- type: mrr_at_3
value: 54.334
- type: mrr_at_5
value: 55.774
- type: ndcg_at_1
value: 44.891999999999996
- type: ndcg_at_10
value: 37.134
- type: ndcg_at_100
value: 33.652
- type: ndcg_at_1000
value: 42.548
- type: ndcg_at_3
value: 41.851
- type: ndcg_at_5
value: 39.842
- type: precision_at_1
value: 46.44
- type: precision_at_10
value: 27.647
- type: precision_at_100
value: 8.309999999999999
- type: precision_at_1000
value: 2.146
- type: precision_at_3
value: 39.422000000000004
- type: precision_at_5
value: 34.675
- type: recall_at_1
value: 6.029
- type: recall_at_10
value: 18.907
- type: recall_at_100
value: 33.76
- type: recall_at_1000
value: 65.14999999999999
- type: recall_at_3
value: 11.584999999999999
- type: recall_at_5
value: 14.626
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.373000000000005
- type: map_at_10
value: 55.836
- type: map_at_100
value: 56.611999999999995
- type: map_at_1000
value: 56.63
- type: map_at_3
value: 51.747
- type: map_at_5
value: 54.337999999999994
- type: mrr_at_1
value: 44.147999999999996
- type: mrr_at_10
value: 58.42699999999999
- type: mrr_at_100
value: 58.902
- type: mrr_at_1000
value: 58.914
- type: mrr_at_3
value: 55.156000000000006
- type: mrr_at_5
value: 57.291000000000004
- type: ndcg_at_1
value: 44.119
- type: ndcg_at_10
value: 63.444
- type: ndcg_at_100
value: 66.40599999999999
- type: ndcg_at_1000
value: 66.822
- type: ndcg_at_3
value: 55.962
- type: ndcg_at_5
value: 60.228
- type: precision_at_1
value: 44.119
- type: precision_at_10
value: 10.006
- type: precision_at_100
value: 1.17
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.135
- type: precision_at_5
value: 17.59
- type: recall_at_1
value: 39.373000000000005
- type: recall_at_10
value: 83.78999999999999
- type: recall_at_100
value: 96.246
- type: recall_at_1000
value: 99.324
- type: recall_at_3
value: 64.71900000000001
- type: recall_at_5
value: 74.508
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.199
- type: map_at_10
value: 82.892
- type: map_at_100
value: 83.578
- type: map_at_1000
value: 83.598
- type: map_at_3
value: 79.948
- type: map_at_5
value: 81.779
- type: mrr_at_1
value: 79.67
- type: mrr_at_10
value: 86.115
- type: mrr_at_100
value: 86.249
- type: mrr_at_1000
value: 86.251
- type: mrr_at_3
value: 85.08200000000001
- type: mrr_at_5
value: 85.783
- type: ndcg_at_1
value: 79.67
- type: ndcg_at_10
value: 86.839
- type: ndcg_at_100
value: 88.252
- type: ndcg_at_1000
value: 88.401
- type: ndcg_at_3
value: 83.86200000000001
- type: ndcg_at_5
value: 85.473
- type: precision_at_1
value: 79.67
- type: precision_at_10
value: 13.19
- type: precision_at_100
value: 1.521
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.677
- type: precision_at_5
value: 24.118000000000002
- type: recall_at_1
value: 69.199
- type: recall_at_10
value: 94.321
- type: recall_at_100
value: 99.20400000000001
- type: recall_at_1000
value: 99.947
- type: recall_at_3
value: 85.787
- type: recall_at_5
value: 90.365
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82810046856353
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.38132611783628
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.127000000000001
- type: map_at_10
value: 12.235
- type: map_at_100
value: 14.417
- type: map_at_1000
value: 14.75
- type: map_at_3
value: 8.906
- type: map_at_5
value: 10.591000000000001
- type: mrr_at_1
value: 25.2
- type: mrr_at_10
value: 35.879
- type: mrr_at_100
value: 36.935
- type: mrr_at_1000
value: 36.997
- type: mrr_at_3
value: 32.783
- type: mrr_at_5
value: 34.367999999999995
- type: ndcg_at_1
value: 25.2
- type: ndcg_at_10
value: 20.509
- type: ndcg_at_100
value: 28.67
- type: ndcg_at_1000
value: 34.42
- type: ndcg_at_3
value: 19.948
- type: ndcg_at_5
value: 17.166
- type: precision_at_1
value: 25.2
- type: precision_at_10
value: 10.440000000000001
- type: precision_at_100
value: 2.214
- type: precision_at_1000
value: 0.359
- type: precision_at_3
value: 18.533
- type: precision_at_5
value: 14.860000000000001
- type: recall_at_1
value: 5.127000000000001
- type: recall_at_10
value: 21.147
- type: recall_at_100
value: 44.946999999999996
- type: recall_at_1000
value: 72.89
- type: recall_at_3
value: 11.277
- type: recall_at_5
value: 15.042
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.0373011786213
- type: cos_sim_spearman
value: 79.27889560856613
- type: euclidean_pearson
value: 80.31186315495655
- type: euclidean_spearman
value: 79.41630415280811
- type: manhattan_pearson
value: 80.31755140442013
- type: manhattan_spearman
value: 79.43069870027611
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.8659751342045
- type: cos_sim_spearman
value: 76.95377612997667
- type: euclidean_pearson
value: 81.24552945497848
- type: euclidean_spearman
value: 77.18236963555253
- type: manhattan_pearson
value: 81.26477607759037
- type: manhattan_spearman
value: 77.13821753062756
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.34597139044875
- type: cos_sim_spearman
value: 84.124169425592
- type: euclidean_pearson
value: 83.68590721511401
- type: euclidean_spearman
value: 84.18846190846398
- type: manhattan_pearson
value: 83.57630235061498
- type: manhattan_spearman
value: 84.10244043726902
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.67641885599572
- type: cos_sim_spearman
value: 80.46450725650428
- type: euclidean_pearson
value: 81.61645042715865
- type: euclidean_spearman
value: 80.61418394236874
- type: manhattan_pearson
value: 81.55712034928871
- type: manhattan_spearman
value: 80.57905670523951
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.86650310886782
- type: cos_sim_spearman
value: 89.76081629222328
- type: euclidean_pearson
value: 89.1530747029954
- type: euclidean_spearman
value: 89.80990657280248
- type: manhattan_pearson
value: 89.10640563278132
- type: manhattan_spearman
value: 89.76282108434047
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.93864027911118
- type: cos_sim_spearman
value: 85.47096193999023
- type: euclidean_pearson
value: 85.03141840870533
- type: euclidean_spearman
value: 85.43124029598181
- type: manhattan_pearson
value: 84.99002664393512
- type: manhattan_spearman
value: 85.39169195120834
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.7045343749832
- type: cos_sim_spearman
value: 89.03262221146677
- type: euclidean_pearson
value: 89.56078218264365
- type: euclidean_spearman
value: 89.17827006466868
- type: manhattan_pearson
value: 89.52717595468582
- type: manhattan_spearman
value: 89.15878115952923
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.20191302875551
- type: cos_sim_spearman
value: 64.11446552557646
- type: euclidean_pearson
value: 64.6918197393619
- type: euclidean_spearman
value: 63.440182631197764
- type: manhattan_pearson
value: 64.55692904121835
- type: manhattan_spearman
value: 63.424877742756266
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.37793104662344
- type: cos_sim_spearman
value: 87.7357802629067
- type: euclidean_pearson
value: 87.4286301545109
- type: euclidean_spearman
value: 87.78452920777421
- type: manhattan_pearson
value: 87.42445169331255
- type: manhattan_spearman
value: 87.78537677249598
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 84.31465405081792
- type: mrr
value: 95.7173781193389
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.904
- type: map_at_100
value: 68.539
- type: map_at_1000
value: 68.562
- type: map_at_3
value: 65.415
- type: map_at_5
value: 66.788
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 68.797
- type: mrr_at_100
value: 69.236
- type: mrr_at_1000
value: 69.257
- type: mrr_at_3
value: 66.667
- type: mrr_at_5
value: 67.967
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 72.24199999999999
- type: ndcg_at_100
value: 74.86
- type: ndcg_at_1000
value: 75.354
- type: ndcg_at_3
value: 67.93400000000001
- type: ndcg_at_5
value: 70.02199999999999
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.533
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.778000000000002
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.383
- type: recall_at_100
value: 96.267
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.094
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.8029702970297
- type: cos_sim_ap
value: 94.9210324173411
- type: cos_sim_f1
value: 89.8521162672106
- type: cos_sim_precision
value: 91.67533818938605
- type: cos_sim_recall
value: 88.1
- type: dot_accuracy
value: 99.69504950495049
- type: dot_ap
value: 90.4919719146181
- type: dot_f1
value: 84.72289156626506
- type: dot_precision
value: 81.76744186046511
- type: dot_recall
value: 87.9
- type: euclidean_accuracy
value: 99.79702970297029
- type: euclidean_ap
value: 94.87827463795753
- type: euclidean_f1
value: 89.55680081507896
- type: euclidean_precision
value: 91.27725856697819
- type: euclidean_recall
value: 87.9
- type: manhattan_accuracy
value: 99.7990099009901
- type: manhattan_ap
value: 94.87587025149682
- type: manhattan_f1
value: 89.76298537569339
- type: manhattan_precision
value: 90.53916581892166
- type: manhattan_recall
value: 89
- type: max_accuracy
value: 99.8029702970297
- type: max_ap
value: 94.9210324173411
- type: max_f1
value: 89.8521162672106
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.92385753948724
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 33.671756975431144
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.677928036739004
- type: mrr
value: 51.56413133435193
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.523589340819683
- type: cos_sim_spearman
value: 30.187407518823235
- type: dot_pearson
value: 29.039713969699015
- type: dot_spearman
value: 29.114740651155508
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.211
- type: map_at_10
value: 1.6199999999999999
- type: map_at_100
value: 8.658000000000001
- type: map_at_1000
value: 21.538
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.919
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.18599999999999
- type: mrr_at_100
value: 86.18599999999999
- type: mrr_at_1000
value: 86.18599999999999
- type: mrr_at_3
value: 85
- type: mrr_at_5
value: 85.9
- type: ndcg_at_1
value: 74
- type: ndcg_at_10
value: 66.542
- type: ndcg_at_100
value: 50.163999999999994
- type: ndcg_at_1000
value: 45.696999999999996
- type: ndcg_at_3
value: 71.531
- type: ndcg_at_5
value: 70.45
- type: precision_at_1
value: 78
- type: precision_at_10
value: 69.39999999999999
- type: precision_at_100
value: 51.06
- type: precision_at_1000
value: 20.022000000000002
- type: precision_at_3
value: 76
- type: precision_at_5
value: 74.8
- type: recall_at_1
value: 0.211
- type: recall_at_10
value: 1.813
- type: recall_at_100
value: 12.098
- type: recall_at_1000
value: 42.618
- type: recall_at_3
value: 0.603
- type: recall_at_5
value: 0.987
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.2079999999999997
- type: map_at_10
value: 7.777000000000001
- type: map_at_100
value: 12.825000000000001
- type: map_at_1000
value: 14.196
- type: map_at_3
value: 4.285
- type: map_at_5
value: 6.177
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 42.635
- type: mrr_at_100
value: 43.955
- type: mrr_at_1000
value: 43.955
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.088
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 20.666999999999998
- type: ndcg_at_100
value: 31.840000000000003
- type: ndcg_at_1000
value: 43.191
- type: ndcg_at_3
value: 23.45
- type: ndcg_at_5
value: 22.994
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 17.959
- type: precision_at_100
value: 6.755
- type: precision_at_1000
value: 1.4200000000000002
- type: precision_at_3
value: 23.810000000000002
- type: precision_at_5
value: 23.673
- type: recall_at_1
value: 2.2079999999999997
- type: recall_at_10
value: 13.144
- type: recall_at_100
value: 42.491
- type: recall_at_1000
value: 77.04299999999999
- type: recall_at_3
value: 5.3469999999999995
- type: recall_at_5
value: 9.139
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9044
- type: ap
value: 14.625783489340755
- type: f1
value: 54.814936562590546
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.94227504244483
- type: f1
value: 61.22516038508854
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.602409155145864
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.94641473445789
- type: cos_sim_ap
value: 76.91572747061197
- type: cos_sim_f1
value: 70.14348097317529
- type: cos_sim_precision
value: 66.53254437869822
- type: cos_sim_recall
value: 74.1688654353562
- type: dot_accuracy
value: 84.80061989628658
- type: dot_ap
value: 70.7952548895177
- type: dot_f1
value: 65.44780728844965
- type: dot_precision
value: 61.53310104529617
- type: dot_recall
value: 69.89445910290237
- type: euclidean_accuracy
value: 86.94641473445789
- type: euclidean_ap
value: 76.80774009393652
- type: euclidean_f1
value: 70.30522503879979
- type: euclidean_precision
value: 68.94977168949772
- type: euclidean_recall
value: 71.71503957783642
- type: manhattan_accuracy
value: 86.8629671574179
- type: manhattan_ap
value: 76.76518632600317
- type: manhattan_f1
value: 70.16056518946692
- type: manhattan_precision
value: 68.360450563204
- type: manhattan_recall
value: 72.0580474934037
- type: max_accuracy
value: 86.94641473445789
- type: max_ap
value: 76.91572747061197
- type: max_f1
value: 70.30522503879979
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.10428066907285
- type: cos_sim_ap
value: 86.25114759921435
- type: cos_sim_f1
value: 78.37857884586856
- type: cos_sim_precision
value: 75.60818546078993
- type: cos_sim_recall
value: 81.35971666153372
- type: dot_accuracy
value: 87.41995575736406
- type: dot_ap
value: 81.51838010086782
- type: dot_f1
value: 74.77398015435503
- type: dot_precision
value: 71.53002390662354
- type: dot_recall
value: 78.32614721281182
- type: euclidean_accuracy
value: 89.12368533395428
- type: euclidean_ap
value: 86.33456799874504
- type: euclidean_f1
value: 78.45496750232127
- type: euclidean_precision
value: 75.78388462366364
- type: euclidean_recall
value: 81.32121958731136
- type: manhattan_accuracy
value: 89.10622113556099
- type: manhattan_ap
value: 86.31215061745333
- type: manhattan_f1
value: 78.40684906011539
- type: manhattan_precision
value: 75.89536643366722
- type: manhattan_recall
value: 81.09023714197721
- type: max_accuracy
value: 89.12368533395428
- type: max_ap
value: 86.33456799874504
- type: max_f1
value: 78.45496750232127
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2)
```bash
pip install "hf-hub-ctranslate2>=2.12.0" "ctranslate2>=3.17.1"
```
```python
# from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-e5-large-v2"
model_name_orig = "intfloat/e5-large-v2"

from hf_hub_ctranslate2 import EncoderCT2fromHfHub
model = EncoderCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
)
outputs = model.generate(
    text=["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
    max_length=64,
)  # perform downstream tasks on outputs
outputs["pooler_output"]
outputs["last_hidden_state"]
outputs["attention_mask"]

# alternative, use SentenceTransformer Mix-In
# for end-to-end Sentence embeddings generation
# (not pulling from this CT2fast-HF repo)
from hf_hub_ctranslate2 import CT2SentenceTransformer
model = CT2SentenceTransformer(
    model_name_orig, compute_type="int8_float16", device="cuda"
)
embeddings = model.encode(
    ["I like soccer", "I like tennis", "The eiffel tower is in Paris"],
    batch_size=32,
    convert_to_numpy=True,
    normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100
# Hint: you can also host this code via REST API and
# via github.com/michaelfeil/infinity
```
Checkpoint compatible with [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"` (a minimal CPU sketch follows below)
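For a CPU-only setup, a minimal sketch is shown below. It reuses the `EncoderCT2fromHfHub` loader from the example above; only the device and compute type differ from the CUDA example, so treat it as an illustration rather than a tested configuration.
```python
from hf_hub_ctranslate2 import EncoderCT2fromHfHub

# Same loader as the CUDA example above; only device and compute_type change.
model = EncoderCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-e5-large-v2",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(
    text=["query: how much protein should a female eat"],
    max_length=64,
)
# Same output dict as in the CUDA example above.
embeddings = outputs["pooler_output"]
```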
Converted on 2023-10-13 using
```
LLama-2 -> removed <pad> token.
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to those of the original Hugging Face repo.
# Original description
# E5-large-v2
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 24 layers and the embedding size is 1024.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: summit define',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
model = AutoModel.from_pretrained('intfloat/e5-large-v2')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Training Details
Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-large-v2')
input_texts = [
    'query: how much protein should a female eat',
    'query: summit define',
    "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model is trained; otherwise you will see a performance degradation.
Here are some rules of thumb (a short sketch follows the list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores distribute around 0.7 to 1.0?**
This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
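For reference, a standard formulation of a temperature-scaled InfoNCE objective is sketched below (the exact loss used in training is described in the paper linked above). With a temperature as low as 0.01, small gaps in cosine similarity are amplified inside the softmax, which is why absolute scores end up compressed into a narrow high band while their relative order stays informative.
```latex
\mathcal{L}_{\mathrm{InfoNCE}}
  = -\log \frac{\exp\left(\cos(q, p^{+})/\tau\right)}
               {\exp\left(\cos(q, p^{+})/\tau\right) + \sum_{i} \exp\left(\cos(q, p_{i}^{-})/\tau\right)},
  \qquad \tau = 0.01 .
```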
## Citation
If you find our paper or models helpful, please consider citing as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
This model only works for English texts. Long texts will be truncated to at most 512 tokens.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
mixamrepijey/instructor-models | mixamrepijey | sentence-similarity | [
"sentence-transformers",
"pytorch",
"t5",
"text-embedding",
"embeddings",
"information-retrieval",
"beir",
"text-classification",
"language-model",
"text-clustering",
"text-semantic-similarity",
"text-evaluation",
"prompt-retrieval",
"text-reranking",
"feature-extraction",
"sentence-similarity",
"transformers",
"English",
"Sentence Similarity",
"natural_questions",
"ms_marco",
"fever",
"hotpot_qa",
"mteb",
"en",
"arxiv:2212.09741",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,706 | 1,706 | 21 | 0 | ---
language: en
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- text-embedding
- embeddings
- information-retrieval
- beir
- text-classification
- language-model
- text-clustering
- text-semantic-similarity
- text-evaluation
- prompt-retrieval
- text-reranking
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- t5
- English
- Sentence Similarity
- natural_questions
- ms_marco
- fever
- hotpot_qa
- mteb
inference: false
model-index:
- name: INSTRUCTOR
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 88.13432835820896
- type: ap
value: 59.298209334395665
- type: f1
value: 83.31769058643586
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.526375
- type: ap
value: 88.16327709705504
- type: f1
value: 91.51095801287843
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.856
- type: f1
value: 45.41490917650942
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.223
- type: map_at_10
value: 47.947
- type: map_at_100
value: 48.742000000000004
- type: map_at_1000
value: 48.745
- type: map_at_3
value: 43.137
- type: map_at_5
value: 45.992
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 48.4
- type: mrr_at_100
value: 49.202
- type: mrr_at_1000
value: 49.205
- type: mrr_at_3
value: 43.551
- type: mrr_at_5
value: 46.467999999999996
- type: ndcg_at_1
value: 31.223
- type: ndcg_at_10
value: 57.045
- type: ndcg_at_100
value: 60.175
- type: ndcg_at_1000
value: 60.233000000000004
- type: ndcg_at_3
value: 47.171
- type: ndcg_at_5
value: 52.322
- type: precision_at_1
value: 31.223
- type: precision_at_10
value: 8.599
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.63
- type: precision_at_5
value: 14.282
- type: recall_at_1
value: 31.223
- type: recall_at_10
value: 85.989
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 58.89
- type: recall_at_5
value: 71.408
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 43.1621946393635
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 32.56417132407894
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.29539304390207
- type: mrr
value: 76.44484017060196
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_spearman
value: 84.38746499431112
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 78.51298701298701
- type: f1
value: 77.49041754069235
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.61848554098577
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 31.32623280148178
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.803000000000004
- type: map_at_10
value: 48.848
- type: map_at_100
value: 50.5
- type: map_at_1000
value: 50.602999999999994
- type: map_at_3
value: 45.111000000000004
- type: map_at_5
value: 47.202
- type: mrr_at_1
value: 44.635000000000005
- type: mrr_at_10
value: 55.593
- type: mrr_at_100
value: 56.169999999999995
- type: mrr_at_1000
value: 56.19499999999999
- type: mrr_at_3
value: 53.361999999999995
- type: mrr_at_5
value: 54.806999999999995
- type: ndcg_at_1
value: 44.635000000000005
- type: ndcg_at_10
value: 55.899
- type: ndcg_at_100
value: 60.958
- type: ndcg_at_1000
value: 62.302
- type: ndcg_at_3
value: 51.051
- type: ndcg_at_5
value: 53.351000000000006
- type: precision_at_1
value: 44.635000000000005
- type: precision_at_10
value: 10.786999999999999
- type: precision_at_100
value: 1.6580000000000001
- type: precision_at_1000
value: 0.213
- type: precision_at_3
value: 24.893
- type: precision_at_5
value: 17.740000000000002
- type: recall_at_1
value: 35.803000000000004
- type: recall_at_10
value: 68.657
- type: recall_at_100
value: 89.77199999999999
- type: recall_at_1000
value: 97.67
- type: recall_at_3
value: 54.066
- type: recall_at_5
value: 60.788
- type: map_at_1
value: 33.706
- type: map_at_10
value: 44.896
- type: map_at_100
value: 46.299
- type: map_at_1000
value: 46.44
- type: map_at_3
value: 41.721000000000004
- type: map_at_5
value: 43.486000000000004
- type: mrr_at_1
value: 41.592
- type: mrr_at_10
value: 50.529
- type: mrr_at_100
value: 51.22
- type: mrr_at_1000
value: 51.258
- type: mrr_at_3
value: 48.205999999999996
- type: mrr_at_5
value: 49.528
- type: ndcg_at_1
value: 41.592
- type: ndcg_at_10
value: 50.77199999999999
- type: ndcg_at_100
value: 55.383
- type: ndcg_at_1000
value: 57.288
- type: ndcg_at_3
value: 46.324
- type: ndcg_at_5
value: 48.346000000000004
- type: precision_at_1
value: 41.592
- type: precision_at_10
value: 9.516
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 22.399
- type: precision_at_5
value: 15.770999999999999
- type: recall_at_1
value: 33.706
- type: recall_at_10
value: 61.353
- type: recall_at_100
value: 80.182
- type: recall_at_1000
value: 91.896
- type: recall_at_3
value: 48.204
- type: recall_at_5
value: 53.89699999999999
- type: map_at_1
value: 44.424
- type: map_at_10
value: 57.169000000000004
- type: map_at_100
value: 58.202
- type: map_at_1000
value: 58.242000000000004
- type: map_at_3
value: 53.825
- type: map_at_5
value: 55.714
- type: mrr_at_1
value: 50.470000000000006
- type: mrr_at_10
value: 60.489000000000004
- type: mrr_at_100
value: 61.096
- type: mrr_at_1000
value: 61.112
- type: mrr_at_3
value: 58.192
- type: mrr_at_5
value: 59.611999999999995
- type: ndcg_at_1
value: 50.470000000000006
- type: ndcg_at_10
value: 63.071999999999996
- type: ndcg_at_100
value: 66.964
- type: ndcg_at_1000
value: 67.659
- type: ndcg_at_3
value: 57.74399999999999
- type: ndcg_at_5
value: 60.367000000000004
- type: precision_at_1
value: 50.470000000000006
- type: precision_at_10
value: 10.019
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 25.558999999999997
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 44.424
- type: recall_at_10
value: 77.02
- type: recall_at_100
value: 93.738
- type: recall_at_1000
value: 98.451
- type: recall_at_3
value: 62.888
- type: recall_at_5
value: 69.138
- type: map_at_1
value: 26.294
- type: map_at_10
value: 34.503
- type: map_at_100
value: 35.641
- type: map_at_1000
value: 35.724000000000004
- type: map_at_3
value: 31.753999999999998
- type: map_at_5
value: 33.190999999999995
- type: mrr_at_1
value: 28.362
- type: mrr_at_10
value: 36.53
- type: mrr_at_100
value: 37.541000000000004
- type: mrr_at_1000
value: 37.602000000000004
- type: mrr_at_3
value: 33.917
- type: mrr_at_5
value: 35.358000000000004
- type: ndcg_at_1
value: 28.362
- type: ndcg_at_10
value: 39.513999999999996
- type: ndcg_at_100
value: 44.815
- type: ndcg_at_1000
value: 46.839
- type: ndcg_at_3
value: 34.02
- type: ndcg_at_5
value: 36.522
- type: precision_at_1
value: 28.362
- type: precision_at_10
value: 6.101999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.161999999999999
- type: precision_at_5
value: 9.966
- type: recall_at_1
value: 26.294
- type: recall_at_10
value: 53.098
- type: recall_at_100
value: 76.877
- type: recall_at_1000
value: 91.834
- type: recall_at_3
value: 38.266
- type: recall_at_5
value: 44.287
- type: map_at_1
value: 16.407
- type: map_at_10
value: 25.185999999999996
- type: map_at_100
value: 26.533
- type: map_at_1000
value: 26.657999999999998
- type: map_at_3
value: 22.201999999999998
- type: map_at_5
value: 23.923
- type: mrr_at_1
value: 20.522000000000002
- type: mrr_at_10
value: 29.522
- type: mrr_at_100
value: 30.644
- type: mrr_at_1000
value: 30.713
- type: mrr_at_3
value: 26.679000000000002
- type: mrr_at_5
value: 28.483000000000004
- type: ndcg_at_1
value: 20.522000000000002
- type: ndcg_at_10
value: 30.656
- type: ndcg_at_100
value: 36.864999999999995
- type: ndcg_at_1000
value: 39.675
- type: ndcg_at_3
value: 25.319000000000003
- type: ndcg_at_5
value: 27.992
- type: precision_at_1
value: 20.522000000000002
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 1.027
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 12.396
- type: precision_at_5
value: 9.328
- type: recall_at_1
value: 16.407
- type: recall_at_10
value: 43.164
- type: recall_at_100
value: 69.695
- type: recall_at_1000
value: 89.41900000000001
- type: recall_at_3
value: 28.634999999999998
- type: recall_at_5
value: 35.308
- type: map_at_1
value: 30.473
- type: map_at_10
value: 41.676
- type: map_at_100
value: 43.120999999999995
- type: map_at_1000
value: 43.230000000000004
- type: map_at_3
value: 38.306000000000004
- type: map_at_5
value: 40.355999999999995
- type: mrr_at_1
value: 37.536
- type: mrr_at_10
value: 47.643
- type: mrr_at_100
value: 48.508
- type: mrr_at_1000
value: 48.551
- type: mrr_at_3
value: 45.348
- type: mrr_at_5
value: 46.744
- type: ndcg_at_1
value: 37.536
- type: ndcg_at_10
value: 47.823
- type: ndcg_at_100
value: 53.395
- type: ndcg_at_1000
value: 55.271
- type: ndcg_at_3
value: 42.768
- type: ndcg_at_5
value: 45.373000000000005
- type: precision_at_1
value: 37.536
- type: precision_at_10
value: 8.681
- type: precision_at_100
value: 1.34
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 20.468
- type: precision_at_5
value: 14.495
- type: recall_at_1
value: 30.473
- type: recall_at_10
value: 60.092999999999996
- type: recall_at_100
value: 82.733
- type: recall_at_1000
value: 94.875
- type: recall_at_3
value: 45.734
- type: recall_at_5
value: 52.691
- type: map_at_1
value: 29.976000000000003
- type: map_at_10
value: 41.097
- type: map_at_100
value: 42.547000000000004
- type: map_at_1000
value: 42.659000000000006
- type: map_at_3
value: 37.251
- type: map_at_5
value: 39.493
- type: mrr_at_1
value: 37.557
- type: mrr_at_10
value: 46.605000000000004
- type: mrr_at_100
value: 47.487
- type: mrr_at_1000
value: 47.54
- type: mrr_at_3
value: 43.721
- type: mrr_at_5
value: 45.411
- type: ndcg_at_1
value: 37.557
- type: ndcg_at_10
value: 47.449000000000005
- type: ndcg_at_100
value: 53.052
- type: ndcg_at_1000
value: 55.010999999999996
- type: ndcg_at_3
value: 41.439
- type: ndcg_at_5
value: 44.292
- type: precision_at_1
value: 37.557
- type: precision_at_10
value: 8.847
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 20.091
- type: precision_at_5
value: 14.384
- type: recall_at_1
value: 29.976000000000003
- type: recall_at_10
value: 60.99099999999999
- type: recall_at_100
value: 84.245
- type: recall_at_1000
value: 96.97200000000001
- type: recall_at_3
value: 43.794
- type: recall_at_5
value: 51.778999999999996
- type: map_at_1
value: 28.099166666666665
- type: map_at_10
value: 38.1365
- type: map_at_100
value: 39.44491666666667
- type: map_at_1000
value: 39.55858333333334
- type: map_at_3
value: 35.03641666666666
- type: map_at_5
value: 36.79833333333334
- type: mrr_at_1
value: 33.39966666666667
- type: mrr_at_10
value: 42.42583333333333
- type: mrr_at_100
value: 43.28575
- type: mrr_at_1000
value: 43.33741666666667
- type: mrr_at_3
value: 39.94975
- type: mrr_at_5
value: 41.41633333333334
- type: ndcg_at_1
value: 33.39966666666667
- type: ndcg_at_10
value: 43.81741666666667
- type: ndcg_at_100
value: 49.08166666666667
- type: ndcg_at_1000
value: 51.121166666666674
- type: ndcg_at_3
value: 38.73575
- type: ndcg_at_5
value: 41.18158333333333
- type: precision_at_1
value: 33.39966666666667
- type: precision_at_10
value: 7.738916666666667
- type: precision_at_100
value: 1.2265833333333331
- type: precision_at_1000
value: 0.15983333333333336
- type: precision_at_3
value: 17.967416666666665
- type: precision_at_5
value: 12.78675
- type: recall_at_1
value: 28.099166666666665
- type: recall_at_10
value: 56.27049999999999
- type: recall_at_100
value: 78.93291666666667
- type: recall_at_1000
value: 92.81608333333334
- type: recall_at_3
value: 42.09775
- type: recall_at_5
value: 48.42533333333334
- type: map_at_1
value: 23.663
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.426
- type: map_at_1000
value: 31.519000000000002
- type: map_at_3
value: 28.069
- type: map_at_5
value: 29.256999999999998
- type: mrr_at_1
value: 26.687
- type: mrr_at_10
value: 33.107
- type: mrr_at_100
value: 34.055
- type: mrr_at_1000
value: 34.117999999999995
- type: mrr_at_3
value: 31.058000000000003
- type: mrr_at_5
value: 32.14
- type: ndcg_at_1
value: 26.687
- type: ndcg_at_10
value: 34.615
- type: ndcg_at_100
value: 39.776
- type: ndcg_at_1000
value: 42.05
- type: ndcg_at_3
value: 30.322
- type: ndcg_at_5
value: 32.157000000000004
- type: precision_at_1
value: 26.687
- type: precision_at_10
value: 5.491
- type: precision_at_100
value: 0.877
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.139000000000001
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.663
- type: recall_at_10
value: 45.035
- type: recall_at_100
value: 68.554
- type: recall_at_1000
value: 85.077
- type: recall_at_3
value: 32.982
- type: recall_at_5
value: 37.688
- type: map_at_1
value: 17.403
- type: map_at_10
value: 25.197000000000003
- type: map_at_100
value: 26.355
- type: map_at_1000
value: 26.487
- type: map_at_3
value: 22.733
- type: map_at_5
value: 24.114
- type: mrr_at_1
value: 21.37
- type: mrr_at_10
value: 29.091
- type: mrr_at_100
value: 30.018
- type: mrr_at_1000
value: 30.096
- type: mrr_at_3
value: 26.887
- type: mrr_at_5
value: 28.157
- type: ndcg_at_1
value: 21.37
- type: ndcg_at_10
value: 30.026000000000003
- type: ndcg_at_100
value: 35.416
- type: ndcg_at_1000
value: 38.45
- type: ndcg_at_3
value: 25.764
- type: ndcg_at_5
value: 27.742
- type: precision_at_1
value: 21.37
- type: precision_at_10
value: 5.609
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 12.423
- type: precision_at_5
value: 9.009
- type: recall_at_1
value: 17.403
- type: recall_at_10
value: 40.573
- type: recall_at_100
value: 64.818
- type: recall_at_1000
value: 86.53699999999999
- type: recall_at_3
value: 28.493000000000002
- type: recall_at_5
value: 33.660000000000004
- type: map_at_1
value: 28.639
- type: map_at_10
value: 38.951
- type: map_at_100
value: 40.238
- type: map_at_1000
value: 40.327
- type: map_at_3
value: 35.842
- type: map_at_5
value: 37.617
- type: mrr_at_1
value: 33.769
- type: mrr_at_10
value: 43.088
- type: mrr_at_100
value: 44.03
- type: mrr_at_1000
value: 44.072
- type: mrr_at_3
value: 40.656
- type: mrr_at_5
value: 42.138999999999996
- type: ndcg_at_1
value: 33.769
- type: ndcg_at_10
value: 44.676
- type: ndcg_at_100
value: 50.416000000000004
- type: ndcg_at_1000
value: 52.227999999999994
- type: ndcg_at_3
value: 39.494
- type: ndcg_at_5
value: 42.013
- type: precision_at_1
value: 33.769
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.966
- type: recall_at_1
value: 28.639
- type: recall_at_10
value: 57.687999999999995
- type: recall_at_100
value: 82.541
- type: recall_at_1000
value: 94.896
- type: recall_at_3
value: 43.651
- type: recall_at_5
value: 49.925999999999995
- type: map_at_1
value: 29.57
- type: map_at_10
value: 40.004
- type: map_at_100
value: 41.75
- type: map_at_1000
value: 41.97
- type: map_at_3
value: 36.788
- type: map_at_5
value: 38.671
- type: mrr_at_1
value: 35.375
- type: mrr_at_10
value: 45.121
- type: mrr_at_100
value: 45.994
- type: mrr_at_1000
value: 46.04
- type: mrr_at_3
value: 42.227
- type: mrr_at_5
value: 43.995
- type: ndcg_at_1
value: 35.375
- type: ndcg_at_10
value: 46.392
- type: ndcg_at_100
value: 52.196
- type: ndcg_at_1000
value: 54.274
- type: ndcg_at_3
value: 41.163
- type: ndcg_at_5
value: 43.813
- type: precision_at_1
value: 35.375
- type: precision_at_10
value: 8.676
- type: precision_at_100
value: 1.678
- type: precision_at_1000
value: 0.253
- type: precision_at_3
value: 19.104
- type: precision_at_5
value: 13.913
- type: recall_at_1
value: 29.57
- type: recall_at_10
value: 58.779
- type: recall_at_100
value: 83.337
- type: recall_at_1000
value: 95.979
- type: recall_at_3
value: 44.005
- type: recall_at_5
value: 50.975
- type: map_at_1
value: 20.832
- type: map_at_10
value: 29.733999999999998
- type: map_at_100
value: 30.727
- type: map_at_1000
value: 30.843999999999998
- type: map_at_3
value: 26.834999999999997
- type: map_at_5
value: 28.555999999999997
- type: mrr_at_1
value: 22.921
- type: mrr_at_10
value: 31.791999999999998
- type: mrr_at_100
value: 32.666000000000004
- type: mrr_at_1000
value: 32.751999999999995
- type: mrr_at_3
value: 29.144
- type: mrr_at_5
value: 30.622
- type: ndcg_at_1
value: 22.921
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.744
- type: ndcg_at_1000
value: 42.407000000000004
- type: ndcg_at_3
value: 29.421000000000003
- type: ndcg_at_5
value: 32.211
- type: precision_at_1
value: 22.921
- type: precision_at_10
value: 5.675
- type: precision_at_100
value: 0.872
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 12.753999999999998
- type: precision_at_5
value: 9.353
- type: recall_at_1
value: 20.832
- type: recall_at_10
value: 48.795
- type: recall_at_100
value: 70.703
- type: recall_at_1000
value: 90.187
- type: recall_at_3
value: 34.455000000000005
- type: recall_at_5
value: 40.967
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.334
- type: map_at_10
value: 19.009999999999998
- type: map_at_100
value: 21.129
- type: map_at_1000
value: 21.328
- type: map_at_3
value: 15.152
- type: map_at_5
value: 17.084
- type: mrr_at_1
value: 23.453
- type: mrr_at_10
value: 36.099
- type: mrr_at_100
value: 37.069
- type: mrr_at_1000
value: 37.104
- type: mrr_at_3
value: 32.096000000000004
- type: mrr_at_5
value: 34.451
- type: ndcg_at_1
value: 23.453
- type: ndcg_at_10
value: 27.739000000000004
- type: ndcg_at_100
value: 35.836
- type: ndcg_at_1000
value: 39.242
- type: ndcg_at_3
value: 21.263
- type: ndcg_at_5
value: 23.677
- type: precision_at_1
value: 23.453
- type: precision_at_10
value: 9.199
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 16.2
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 10.334
- type: recall_at_10
value: 35.177
- type: recall_at_100
value: 63.009
- type: recall_at_1000
value: 81.938
- type: recall_at_3
value: 19.914
- type: recall_at_5
value: 26.077
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.212
- type: map_at_10
value: 17.386
- type: map_at_100
value: 24.234
- type: map_at_1000
value: 25.724999999999998
- type: map_at_3
value: 12.727
- type: map_at_5
value: 14.785
- type: mrr_at_1
value: 59.25
- type: mrr_at_10
value: 68.687
- type: mrr_at_100
value: 69.133
- type: mrr_at_1000
value: 69.14099999999999
- type: mrr_at_3
value: 66.917
- type: mrr_at_5
value: 67.742
- type: ndcg_at_1
value: 48.625
- type: ndcg_at_10
value: 36.675999999999995
- type: ndcg_at_100
value: 41.543
- type: ndcg_at_1000
value: 49.241
- type: ndcg_at_3
value: 41.373
- type: ndcg_at_5
value: 38.707
- type: precision_at_1
value: 59.25
- type: precision_at_10
value: 28.525
- type: precision_at_100
value: 9.027000000000001
- type: precision_at_1000
value: 1.8339999999999999
- type: precision_at_3
value: 44.833
- type: precision_at_5
value: 37.35
- type: recall_at_1
value: 8.212
- type: recall_at_10
value: 23.188
- type: recall_at_100
value: 48.613
- type: recall_at_1000
value: 73.093
- type: recall_at_3
value: 14.419
- type: recall_at_5
value: 17.798
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 52.725
- type: f1
value: 46.50743309855908
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.086
- type: map_at_10
value: 66.914
- type: map_at_100
value: 67.321
- type: map_at_1000
value: 67.341
- type: map_at_3
value: 64.75800000000001
- type: map_at_5
value: 66.189
- type: mrr_at_1
value: 59.28600000000001
- type: mrr_at_10
value: 71.005
- type: mrr_at_100
value: 71.304
- type: mrr_at_1000
value: 71.313
- type: mrr_at_3
value: 69.037
- type: mrr_at_5
value: 70.35
- type: ndcg_at_1
value: 59.28600000000001
- type: ndcg_at_10
value: 72.695
- type: ndcg_at_100
value: 74.432
- type: ndcg_at_1000
value: 74.868
- type: ndcg_at_3
value: 68.72200000000001
- type: ndcg_at_5
value: 71.081
- type: precision_at_1
value: 59.28600000000001
- type: precision_at_10
value: 9.499
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 27.503
- type: precision_at_5
value: 17.854999999999997
- type: recall_at_1
value: 55.086
- type: recall_at_10
value: 86.453
- type: recall_at_100
value: 94.028
- type: recall_at_1000
value: 97.052
- type: recall_at_3
value: 75.821
- type: recall_at_5
value: 81.6
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.262999999999998
- type: map_at_10
value: 37.488
- type: map_at_100
value: 39.498
- type: map_at_1000
value: 39.687
- type: map_at_3
value: 32.529
- type: map_at_5
value: 35.455
- type: mrr_at_1
value: 44.907000000000004
- type: mrr_at_10
value: 53.239000000000004
- type: mrr_at_100
value: 54.086
- type: mrr_at_1000
value: 54.122
- type: mrr_at_3
value: 51.235
- type: mrr_at_5
value: 52.415
- type: ndcg_at_1
value: 44.907000000000004
- type: ndcg_at_10
value: 45.446
- type: ndcg_at_100
value: 52.429
- type: ndcg_at_1000
value: 55.169000000000004
- type: ndcg_at_3
value: 41.882000000000005
- type: ndcg_at_5
value: 43.178
- type: precision_at_1
value: 44.907000000000004
- type: precision_at_10
value: 12.931999999999999
- type: precision_at_100
value: 2.025
- type: precision_at_1000
value: 0.248
- type: precision_at_3
value: 28.652
- type: precision_at_5
value: 21.204
- type: recall_at_1
value: 22.262999999999998
- type: recall_at_10
value: 52.447
- type: recall_at_100
value: 78.045
- type: recall_at_1000
value: 94.419
- type: recall_at_3
value: 38.064
- type: recall_at_5
value: 44.769
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.519
- type: map_at_10
value: 45.831
- type: map_at_100
value: 46.815
- type: map_at_1000
value: 46.899
- type: map_at_3
value: 42.836
- type: map_at_5
value: 44.65
- type: mrr_at_1
value: 65.037
- type: mrr_at_10
value: 72.16
- type: mrr_at_100
value: 72.51100000000001
- type: mrr_at_1000
value: 72.53
- type: mrr_at_3
value: 70.682
- type: mrr_at_5
value: 71.54599999999999
- type: ndcg_at_1
value: 65.037
- type: ndcg_at_10
value: 55.17999999999999
- type: ndcg_at_100
value: 58.888
- type: ndcg_at_1000
value: 60.648
- type: ndcg_at_3
value: 50.501
- type: ndcg_at_5
value: 52.977
- type: precision_at_1
value: 65.037
- type: precision_at_10
value: 11.530999999999999
- type: precision_at_100
value: 1.4460000000000002
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 31.483
- type: precision_at_5
value: 20.845
- type: recall_at_1
value: 32.519
- type: recall_at_10
value: 57.657000000000004
- type: recall_at_100
value: 72.30199999999999
- type: recall_at_1000
value: 84.024
- type: recall_at_3
value: 47.225
- type: recall_at_5
value: 52.113
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 88.3168
- type: ap
value: 83.80165516037135
- type: f1
value: 88.29942471066407
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 20.724999999999998
- type: map_at_10
value: 32.736
- type: map_at_100
value: 33.938
- type: map_at_1000
value: 33.991
- type: map_at_3
value: 28.788000000000004
- type: map_at_5
value: 31.016
- type: mrr_at_1
value: 21.361
- type: mrr_at_10
value: 33.323
- type: mrr_at_100
value: 34.471000000000004
- type: mrr_at_1000
value: 34.518
- type: mrr_at_3
value: 29.453000000000003
- type: mrr_at_5
value: 31.629
- type: ndcg_at_1
value: 21.361
- type: ndcg_at_10
value: 39.649
- type: ndcg_at_100
value: 45.481
- type: ndcg_at_1000
value: 46.775
- type: ndcg_at_3
value: 31.594
- type: ndcg_at_5
value: 35.543
- type: precision_at_1
value: 21.361
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_100
value: 0.931
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.514999999999999
- type: precision_at_5
value: 10.100000000000001
- type: recall_at_1
value: 20.724999999999998
- type: recall_at_10
value: 61.034
- type: recall_at_100
value: 88.062
- type: recall_at_1000
value: 97.86399999999999
- type: recall_at_3
value: 39.072
- type: recall_at_5
value: 48.53
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.8919288645691
- type: f1
value: 93.57059586398059
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.97993616051072
- type: f1
value: 48.244319183606535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.90047074646941
- type: f1
value: 66.48999056063725
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.34566240753195
- type: f1
value: 73.54164154290658
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.21866934757011
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.000936217235534
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.68189362520352
- type: mrr
value: 32.69603637784303
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.078
- type: map_at_10
value: 12.671
- type: map_at_100
value: 16.291
- type: map_at_1000
value: 17.855999999999998
- type: map_at_3
value: 9.610000000000001
- type: map_at_5
value: 11.152
- type: mrr_at_1
value: 43.963
- type: mrr_at_10
value: 53.173
- type: mrr_at_100
value: 53.718999999999994
- type: mrr_at_1000
value: 53.756
- type: mrr_at_3
value: 50.980000000000004
- type: mrr_at_5
value: 52.42
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.086
- type: ndcg_at_100
value: 32.545
- type: ndcg_at_1000
value: 41.144999999999996
- type: ndcg_at_3
value: 39.434999999999995
- type: ndcg_at_5
value: 37.888
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.014999999999997
- type: precision_at_100
value: 8.594
- type: precision_at_1000
value: 2.169
- type: precision_at_3
value: 37.049
- type: precision_at_5
value: 33.065
- type: recall_at_1
value: 6.078
- type: recall_at_10
value: 16.17
- type: recall_at_100
value: 34.512
- type: recall_at_1000
value: 65.447
- type: recall_at_3
value: 10.706
- type: recall_at_5
value: 13.158
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.378000000000004
- type: map_at_10
value: 42.178
- type: map_at_100
value: 43.32
- type: map_at_1000
value: 43.358000000000004
- type: map_at_3
value: 37.474000000000004
- type: map_at_5
value: 40.333000000000006
- type: mrr_at_1
value: 30.823
- type: mrr_at_10
value: 44.626
- type: mrr_at_100
value: 45.494
- type: mrr_at_1000
value: 45.519
- type: mrr_at_3
value: 40.585
- type: mrr_at_5
value: 43.146
- type: ndcg_at_1
value: 30.794
- type: ndcg_at_10
value: 50.099000000000004
- type: ndcg_at_100
value: 54.900999999999996
- type: ndcg_at_1000
value: 55.69499999999999
- type: ndcg_at_3
value: 41.238
- type: ndcg_at_5
value: 46.081
- type: precision_at_1
value: 30.794
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 1.124
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 18.926000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 27.378000000000004
- type: recall_at_10
value: 71.842
- type: recall_at_100
value: 92.565
- type: recall_at_1000
value: 98.402
- type: recall_at_3
value: 49.053999999999995
- type: recall_at_5
value: 60.207
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.557
- type: map_at_10
value: 84.729
- type: map_at_100
value: 85.369
- type: map_at_1000
value: 85.382
- type: map_at_3
value: 81.72
- type: map_at_5
value: 83.613
- type: mrr_at_1
value: 81.3
- type: mrr_at_10
value: 87.488
- type: mrr_at_100
value: 87.588
- type: mrr_at_1000
value: 87.589
- type: mrr_at_3
value: 86.53
- type: mrr_at_5
value: 87.18599999999999
- type: ndcg_at_1
value: 81.28999999999999
- type: ndcg_at_10
value: 88.442
- type: ndcg_at_100
value: 89.637
- type: ndcg_at_1000
value: 89.70700000000001
- type: ndcg_at_3
value: 85.55199999999999
- type: ndcg_at_5
value: 87.154
- type: precision_at_1
value: 81.28999999999999
- type: precision_at_10
value: 13.489999999999998
- type: precision_at_100
value: 1.54
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.553
- type: precision_at_5
value: 24.708
- type: recall_at_1
value: 70.557
- type: recall_at_10
value: 95.645
- type: recall_at_100
value: 99.693
- type: recall_at_1000
value: 99.995
- type: recall_at_3
value: 87.359
- type: recall_at_5
value: 91.89699999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 63.65060114776209
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.63271250680617
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.263
- type: map_at_10
value: 10.801
- type: map_at_100
value: 12.888
- type: map_at_1000
value: 13.224
- type: map_at_3
value: 7.362
- type: map_at_5
value: 9.149000000000001
- type: mrr_at_1
value: 21
- type: mrr_at_10
value: 31.416
- type: mrr_at_100
value: 32.513
- type: mrr_at_1000
value: 32.58
- type: mrr_at_3
value: 28.116999999999997
- type: mrr_at_5
value: 29.976999999999997
- type: ndcg_at_1
value: 21
- type: ndcg_at_10
value: 18.551000000000002
- type: ndcg_at_100
value: 26.657999999999998
- type: ndcg_at_1000
value: 32.485
- type: ndcg_at_3
value: 16.834
- type: ndcg_at_5
value: 15.204999999999998
- type: precision_at_1
value: 21
- type: precision_at_10
value: 9.84
- type: precision_at_100
value: 2.16
- type: precision_at_1000
value: 0.35500000000000004
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 13.62
- type: recall_at_1
value: 4.263
- type: recall_at_10
value: 19.922
- type: recall_at_100
value: 43.808
- type: recall_at_1000
value: 72.14500000000001
- type: recall_at_3
value: 9.493
- type: recall_at_5
value: 13.767999999999999
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_spearman
value: 81.27446313317233
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_spearman
value: 76.27963301217527
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_spearman
value: 88.18495048450949
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_spearman
value: 81.91982338692046
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_spearman
value: 89.00896818385291
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_spearman
value: 85.48814644586132
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_spearman
value: 90.30116926966582
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_spearman
value: 67.74132963032342
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_spearman
value: 86.87741355780479
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.0019012295875
- type: mrr
value: 94.70267024188593
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 50.05
- type: map_at_10
value: 59.36
- type: map_at_100
value: 59.967999999999996
- type: map_at_1000
value: 60.023
- type: map_at_3
value: 56.515
- type: map_at_5
value: 58.272999999999996
- type: mrr_at_1
value: 53
- type: mrr_at_10
value: 61.102000000000004
- type: mrr_at_100
value: 61.476
- type: mrr_at_1000
value: 61.523
- type: mrr_at_3
value: 58.778
- type: mrr_at_5
value: 60.128
- type: ndcg_at_1
value: 53
- type: ndcg_at_10
value: 64.43100000000001
- type: ndcg_at_100
value: 66.73599999999999
- type: ndcg_at_1000
value: 68.027
- type: ndcg_at_3
value: 59.279
- type: ndcg_at_5
value: 61.888
- type: precision_at_1
value: 53
- type: precision_at_10
value: 8.767
- type: precision_at_100
value: 1.01
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 23.444000000000003
- type: precision_at_5
value: 15.667
- type: recall_at_1
value: 50.05
- type: recall_at_10
value: 78.511
- type: recall_at_100
value: 88.5
- type: recall_at_1000
value: 98.333
- type: recall_at_3
value: 64.117
- type: recall_at_5
value: 70.867
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72178217821782
- type: cos_sim_ap
value: 93.0728601593541
- type: cos_sim_f1
value: 85.6727976766699
- type: cos_sim_precision
value: 83.02063789868667
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.72178217821782
- type: dot_ap
value: 93.07287396168348
- type: dot_f1
value: 85.6727976766699
- type: dot_precision
value: 83.02063789868667
- type: dot_recall
value: 88.5
- type: euclidean_accuracy
value: 99.72178217821782
- type: euclidean_ap
value: 93.07285657982895
- type: euclidean_f1
value: 85.6727976766699
- type: euclidean_precision
value: 83.02063789868667
- type: euclidean_recall
value: 88.5
- type: manhattan_accuracy
value: 99.72475247524753
- type: manhattan_ap
value: 93.02792973059809
- type: manhattan_f1
value: 85.7727737973388
- type: manhattan_precision
value: 87.84067085953879
- type: manhattan_recall
value: 83.8
- type: max_accuracy
value: 99.72475247524753
- type: max_ap
value: 93.07287396168348
- type: max_f1
value: 85.7727737973388
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 68.77583615550819
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.151636938606956
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.16607939471187
- type: mrr
value: 52.95172046091163
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.314646669495666
- type: cos_sim_spearman
value: 31.83562491439455
- type: dot_pearson
value: 31.314590842874157
- type: dot_spearman
value: 31.83363065810437
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.198
- type: map_at_10
value: 1.3010000000000002
- type: map_at_100
value: 7.2139999999999995
- type: map_at_1000
value: 20.179
- type: map_at_3
value: 0.528
- type: map_at_5
value: 0.8019999999999999
- type: mrr_at_1
value: 72
- type: mrr_at_10
value: 83.39999999999999
- type: mrr_at_100
value: 83.39999999999999
- type: mrr_at_1000
value: 83.39999999999999
- type: mrr_at_3
value: 81.667
- type: mrr_at_5
value: 83.06700000000001
- type: ndcg_at_1
value: 66
- type: ndcg_at_10
value: 58.059000000000005
- type: ndcg_at_100
value: 44.316
- type: ndcg_at_1000
value: 43.147000000000006
- type: ndcg_at_3
value: 63.815999999999995
- type: ndcg_at_5
value: 63.005
- type: precision_at_1
value: 72
- type: precision_at_10
value: 61.4
- type: precision_at_100
value: 45.62
- type: precision_at_1000
value: 19.866
- type: precision_at_3
value: 70
- type: precision_at_5
value: 68.8
- type: recall_at_1
value: 0.198
- type: recall_at_10
value: 1.517
- type: recall_at_100
value: 10.587
- type: recall_at_1000
value: 41.233
- type: recall_at_3
value: 0.573
- type: recall_at_5
value: 0.907
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 1.894
- type: map_at_10
value: 8.488999999999999
- type: map_at_100
value: 14.445
- type: map_at_1000
value: 16.078
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.019
- type: mrr_at_1
value: 22.448999999999998
- type: mrr_at_10
value: 39.82
- type: mrr_at_100
value: 40.752
- type: mrr_at_1000
value: 40.771
- type: mrr_at_3
value: 34.354
- type: mrr_at_5
value: 37.721
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 21.563
- type: ndcg_at_100
value: 33.857
- type: ndcg_at_1000
value: 46.199
- type: ndcg_at_3
value: 22.296
- type: ndcg_at_5
value: 21.770999999999997
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 19.796
- type: precision_at_100
value: 7.142999999999999
- type: precision_at_1000
value: 1.541
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 22.448999999999998
- type: recall_at_1
value: 1.894
- type: recall_at_10
value: 14.931
- type: recall_at_100
value: 45.524
- type: recall_at_1000
value: 83.243
- type: recall_at_3
value: 5.712
- type: recall_at_5
value: 8.386000000000001
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.049
- type: ap
value: 13.85116971310922
- type: f1
value: 54.37504302487686
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.1312959818902
- type: f1
value: 64.11413877009383
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.13103431861502
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.327889372355
- type: cos_sim_ap
value: 77.42059895975699
- type: cos_sim_f1
value: 71.02706903250873
- type: cos_sim_precision
value: 69.75324344950394
- type: cos_sim_recall
value: 72.34828496042216
- type: dot_accuracy
value: 87.327889372355
- type: dot_ap
value: 77.4209479346677
- type: dot_f1
value: 71.02706903250873
- type: dot_precision
value: 69.75324344950394
- type: dot_recall
value: 72.34828496042216
- type: euclidean_accuracy
value: 87.327889372355
- type: euclidean_ap
value: 77.42096495861037
- type: euclidean_f1
value: 71.02706903250873
- type: euclidean_precision
value: 69.75324344950394
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.31000774870358
- type: manhattan_ap
value: 77.38930750711619
- type: manhattan_f1
value: 71.07935314027831
- type: manhattan_precision
value: 67.70957726295677
- type: manhattan_recall
value: 74.80211081794195
- type: max_accuracy
value: 87.327889372355
- type: max_ap
value: 77.42096495861037
- type: max_f1
value: 71.07935314027831
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.58939729110878
- type: cos_sim_ap
value: 87.17594155025475
- type: cos_sim_f1
value: 79.21146953405018
- type: cos_sim_precision
value: 76.8918527109307
- type: cos_sim_recall
value: 81.67539267015707
- type: dot_accuracy
value: 89.58939729110878
- type: dot_ap
value: 87.17593963273593
- type: dot_f1
value: 79.21146953405018
- type: dot_precision
value: 76.8918527109307
- type: dot_recall
value: 81.67539267015707
- type: euclidean_accuracy
value: 89.58939729110878
- type: euclidean_ap
value: 87.17592466925834
- type: euclidean_f1
value: 79.21146953405018
- type: euclidean_precision
value: 76.8918527109307
- type: euclidean_recall
value: 81.67539267015707
- type: manhattan_accuracy
value: 89.62626615438352
- type: manhattan_ap
value: 87.16589873161546
- type: manhattan_f1
value: 79.25143598295348
- type: manhattan_precision
value: 76.39494177323712
- type: manhattan_recall
value: 82.32984293193716
- type: max_accuracy
value: 89.62626615438352
- type: max_ap
value: 87.17594155025475
- type: max_f1
value: 79.25143598295348
---
# hkunlp/instructor-large
We introduce **Instructor**👨🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and any domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor👨 achieves state-of-the-art performance on 70 diverse embedding tasks ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard))!
The model is easy to use with **our customized** `sentence-transformer` library. For more details, check out [our paper](https://arxiv.org/abs/2212.09741) and [project page](https://instructor-embedding.github.io/)!
**************************** **Updates** ****************************
* 12/28: We released a new [checkpoint](https://huggingface.co/hkunlp/instructor-large) trained with hard negatives, which gives better performance.
* 12/21: We released our [paper](https://arxiv.org/abs/2212.09741), [code](https://github.com/HKUNLP/instructor-embedding), [checkpoint](https://huggingface.co/hkunlp/instructor-large) and [project page](https://instructor-embedding.github.io/)! Check them out!
## Quick start
<hr />
## Installation
```bash
pip install InstructorEmbedding
```
## Compute your customized embeddings
Then you can use the model like this to calculate domain-specific and task-aware embeddings:
```python
from InstructorEmbedding import INSTRUCTOR

# Load the pretrained INSTRUCTOR model.
model = INSTRUCTOR('hkunlp/instructor-large')

# Each input is an [instruction, text] pair; the instruction makes the embedding task-aware.
sentence = "3D ActionSLAM: wearable person tracking in multi-floor environments"
instruction = "Represent the Science title:"
embeddings = model.encode([[instruction, sentence]])
print(embeddings)
```
## Use cases
<hr />
## Calculate embeddings for your customized texts
If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions (a filled-in sketch follows the list below):
Represent the `domain` `text_type` for `task_objective`:
* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
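As a minimal sketch of filling in this template, the snippet below assembles an instruction and encodes a text with it; the specific domain, text type, and task objective strings are illustrative placeholders rather than values prescribed by the model card.
```python
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-large')

# Fill in "Represent the `domain` `text_type` for `task_objective`:" with placeholder values.
domain = "Medicine"                       # optional
text_type = "sentence"                    # required
task_objective = "retrieving a document"  # optional
instruction = f"Represent the {domain} {text_type} for {task_objective}:"

# Encode an example text with the customized instruction.
text = "Metformin is a first-line medication for the treatment of type 2 diabetes."
embedding = model.encode([[instruction, text]])
print(embedding.shape)
```
The resulting instruction reads `Represent the Medicine sentence for retrieving a document:`, which matches the unified template above.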
## Calculate Sentence similarities
You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
```python
from sklearn.metrics.pairwise import cosine_similarity
from InstructorEmbedding import INSTRUCTOR

# Reload the model (or reuse the one already loaded in the Quick start section).
model = INSTRUCTOR('hkunlp/instructor-large')

# Each item is an [instruction, text] pair.
sentences_a = [['Represent the Science sentence: ','Parton energy loss in QCD matter'],
['Represent the Financial statement: ','The Federal Reserve on Wednesday raised its benchmark interest rate.']]
sentences_b = [['Represent the Science sentence: ','The Chiral Phase Transition in Dissipative Dynamics'],
['Represent the Financial statement: ','The funds rose less than 0.5 per cent on Friday']]
embeddings_a = model.encode(sentences_a)
embeddings_b = model.encode(sentences_b)
similarities = cosine_similarity(embeddings_a,embeddings_b)
print(similarities)
```
## Information Retrieval
You can also use **customized embeddings** for information retrieval.
```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from InstructorEmbedding import INSTRUCTOR

# Reload the model (or reuse the one already loaded in the Quick start section).
model = INSTRUCTOR('hkunlp/instructor-large')

query = [['Represent the Wikipedia question for retrieving supporting documents: ','where is the food stored in a yam plant']]
corpus = [['Represent the Wikipedia document for retrieval: ','Capitalism has been dominant in the Western world since the end of feudalism, but most feel[who?] that the term "mixed economies" more precisely describes most contemporary economies, due to their containing both private-owned and state-owned enterprises. In capitalism, prices determine the demand-supply scale. For example, higher demand for certain goods and services lead to higher prices and lower demand for certain goods lead to lower prices.'],
['Represent the Wikipedia document for retrieval: ',"The disparate impact theory is especially controversial under the Fair Housing Act because the Act regulates many activities relating to housing, insurance, and mortgage loans—and some scholars have argued that the theory's use under the Fair Housing Act, combined with extensions of the Community Reinvestment Act, contributed to rise of sub-prime lending and the crash of the U.S. housing market and ensuing global economic recession"],
['Represent the Wikipedia document for retrieval: ','Disparate impact in United States labor law refers to practices in employment, housing, and other areas that adversely affect one group of people of a protected characteristic more than another, even though rules applied by employers or landlords are formally neutral. Although the protected classes vary by statute, most federal civil rights laws protect based on race, color, religion, national origin, and sex as protected traits, and some laws include disability status and other traits as well.']]
query_embeddings = model.encode(query)
corpus_embeddings = model.encode(corpus)
similarities = cosine_similarity(query_embeddings,corpus_embeddings)
retrieved_doc_id = np.argmax(similarities)
print(retrieved_doc_id)
```
## Clustering
Use **customized embeddings** for clustering texts in groups.
```python
import sklearn.cluster
from InstructorEmbedding import INSTRUCTOR

# Reload the model (or reuse the one already loaded in the Quick start section).
model = INSTRUCTOR('hkunlp/instructor-large')

sentences = [['Represent the Medicine sentence for clustering: ','Dynamical Scalar Degree of Freedom in Horava-Lifshitz Gravity'],
['Represent the Medicine sentence for clustering: ','Comparison of Atmospheric Neutrino Flux Calculations at Low Energies'],
['Represent the Medicine sentence for clustering: ','Fermion Bags in the Massive Gross-Neveu Model'],
['Represent the Medicine sentence for clustering: ',"QCD corrections to Associated t-tbar-H production at the Tevatron"],
['Represent the Medicine sentence for clustering: ','A New Analysis of the R Measurements: Resonance Parameters of the Higher, Vector States of Charmonium']]
embeddings = model.encode(sentences)
clustering_model = sklearn.cluster.MiniBatchKMeans(n_clusters=2)
clustering_model.fit(embeddings)
cluster_assignment = clustering_model.labels_
print(cluster_assignment)
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Alignment-Lab-AI/e5-mistral-7b-instruct | Alignment-Lab-AI | feature-extraction | [
"sentence-transformers",
"pytorch",
"safetensors",
"mistral",
"feature-extraction",
"mteb",
"transformers",
"en",
"arxiv:2401.00368",
"arxiv:2104.08663",
"arxiv:2210.07316",
"arxiv:2212.03533",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,734 | 1,734 | 81 | 0 | ---
language:
- en
license: mit
tags:
- mteb
- sentence-transformers
- transformers
model-index:
- name: e5-mistral-7b-instruct
results:
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: None
metrics:
- type: cos_sim_pearson
value: 37.863226091673866
- type: cos_sim_spearman
value: 38.98733013335281
- type: euclidean_pearson
value: 37.51783380497874
- type: euclidean_spearman
value: 38.98733012753365
- type: manhattan_pearson
value: 37.26706888081721
- type: manhattan_spearman
value: 38.709750161903834
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 43.33924583134623
- type: cos_sim_spearman
value: 42.84316155158754
- type: euclidean_pearson
value: 45.62709879515238
- type: euclidean_spearman
value: 42.843155921732404
- type: manhattan_pearson
value: 45.4786950991229
- type: manhattan_spearman
value: 42.657334751855984
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.68656716417911
- type: ap
value: 41.71522322900398
- type: f1
value: 72.37207703532552
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.04710920770879
- type: ap
value: 83.42622221864045
- type: f1
value: 72.14388257905772
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.93103448275862
- type: ap
value: 26.039284760509513
- type: f1
value: 64.81092954450712
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.21627408993577
- type: ap
value: 24.876490553983036
- type: f1
value: 63.8773359684989
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 95.90679999999999
- type: ap
value: 94.32357863164454
- type: f1
value: 95.90485634708557
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.786
- type: f1
value: 55.31211995815146
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.26
- type: f1
value: 52.156230111544986
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 50.33
- type: f1
value: 49.195023008878145
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.3
- type: f1
value: 48.434470184108
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.68599999999999
- type: f1
value: 47.62681775202072
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.238
- type: f1
value: 45.014030559653705
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 53.076
- type: map_at_100
value: 53.657999999999994
- type: map_at_1000
value: 53.659
- type: map_at_3
value: 48.234
- type: map_at_5
value: 51.121
- type: mrr_at_1
value: 37.269000000000005
- type: mrr_at_10
value: 53.335
- type: mrr_at_100
value: 53.916
- type: mrr_at_1000
value: 53.918
- type: mrr_at_3
value: 48.518
- type: mrr_at_5
value: 51.406
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 61.882000000000005
- type: ndcg_at_100
value: 64.165
- type: ndcg_at_1000
value: 64.203
- type: ndcg_at_3
value: 52.049
- type: ndcg_at_5
value: 57.199
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 8.982999999999999
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.029
- type: precision_at_5
value: 15.092
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 89.82900000000001
- type: recall_at_100
value: 99.36
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 63.087
- type: recall_at_5
value: 75.46199999999999
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.45119266859667
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.4958298992051
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 66.98177472838887
- type: mrr
value: 79.91854636591478
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.67086498650698
- type: cos_sim_spearman
value: 85.54773239564638
- type: euclidean_pearson
value: 86.48229161588425
- type: euclidean_spearman
value: 85.54773239564638
- type: manhattan_pearson
value: 86.67533327742343
- type: manhattan_spearman
value: 85.76099026691983
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 50.31998888922809
- type: cos_sim_spearman
value: 50.6369940530675
- type: euclidean_pearson
value: 50.055544636296055
- type: euclidean_spearman
value: 50.63699405154838
- type: manhattan_pearson
value: 50.00739378036807
- type: manhattan_spearman
value: 50.607237418676945
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.5615866388309
- type: f1
value: 99.49895615866389
- type: precision
value: 99.46764091858039
- type: recall
value: 99.5615866388309
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.19656614571869
- type: f1
value: 99.08650671362535
- type: precision
value: 99.0314769975787
- type: recall
value: 99.19656614571869
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.0256321440942
- type: f1
value: 97.83743216718624
- type: precision
value: 97.74390947927492
- type: recall
value: 98.0256321440942
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.22766368264
- type: precision
value: 99.21011058451816
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 88.22727272727272
- type: f1
value: 88.17411732496673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.530637846246975
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 40.23505728593893
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.419028279451275
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 42.5820277929776
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 77.67811726152972
- type: mrr
value: 80.99003968253969
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: None
metrics:
- type: map
value: 78.66055354534922
- type: mrr
value: 81.66119047619047
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.162333333333333
- type: map_at_10
value: 37.22291666666667
- type: map_at_100
value: 38.56733333333333
- type: map_at_1000
value: 38.684250000000006
- type: map_at_3
value: 34.22858333333333
- type: map_at_5
value: 35.852500000000006
- type: mrr_at_1
value: 32.459833333333336
- type: mrr_at_10
value: 41.65358333333333
- type: mrr_at_100
value: 42.566916666666664
- type: mrr_at_1000
value: 42.61766666666667
- type: mrr_at_3
value: 39.210499999999996
- type: mrr_at_5
value: 40.582166666666666
- type: ndcg_at_1
value: 32.459833333333336
- type: ndcg_at_10
value: 42.96758333333333
- type: ndcg_at_100
value: 48.5065
- type: ndcg_at_1000
value: 50.556583333333336
- type: ndcg_at_3
value: 38.004416666666664
- type: ndcg_at_5
value: 40.25916666666667
- type: precision_at_1
value: 32.459833333333336
- type: precision_at_10
value: 7.664583333333333
- type: precision_at_100
value: 1.2349999999999999
- type: precision_at_1000
value: 0.15966666666666668
- type: precision_at_3
value: 17.731166666666663
- type: precision_at_5
value: 12.575333333333335
- type: recall_at_1
value: 27.162333333333333
- type: recall_at_10
value: 55.44158333333334
- type: recall_at_100
value: 79.56966666666666
- type: recall_at_1000
value: 93.45224999999999
- type: recall_at_3
value: 41.433083333333336
- type: recall_at_5
value: 47.31108333333333
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.539
- type: map_at_10
value: 28.494999999999997
- type: map_at_100
value: 30.568
- type: map_at_1000
value: 30.741000000000003
- type: map_at_3
value: 23.846999999999998
- type: map_at_5
value: 26.275
- type: mrr_at_1
value: 37.394
- type: mrr_at_10
value: 50.068
- type: mrr_at_100
value: 50.727
- type: mrr_at_1000
value: 50.751000000000005
- type: mrr_at_3
value: 46.938
- type: mrr_at_5
value: 48.818
- type: ndcg_at_1
value: 37.394
- type: ndcg_at_10
value: 38.349
- type: ndcg_at_100
value: 45.512
- type: ndcg_at_1000
value: 48.321
- type: ndcg_at_3
value: 32.172
- type: ndcg_at_5
value: 34.265
- type: precision_at_1
value: 37.394
- type: precision_at_10
value: 11.927999999999999
- type: precision_at_100
value: 1.966
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 24.126
- type: precision_at_5
value: 18.306
- type: recall_at_1
value: 16.539
- type: recall_at_10
value: 44.504
- type: recall_at_100
value: 68.605
- type: recall_at_1000
value: 84.1
- type: recall_at_3
value: 29.008
- type: recall_at_5
value: 35.58
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 19.482
- type: map_at_10
value: 28.622999999999998
- type: map_at_100
value: 30.262
- type: map_at_1000
value: 30.432
- type: map_at_3
value: 25.647
- type: map_at_5
value: 27.128000000000004
- type: mrr_at_1
value: 30.408
- type: mrr_at_10
value: 37.188
- type: mrr_at_100
value: 38.196000000000005
- type: mrr_at_1000
value: 38.273
- type: mrr_at_3
value: 35.067
- type: mrr_at_5
value: 36.124
- type: ndcg_at_1
value: 30.408
- type: ndcg_at_10
value: 34.215
- type: ndcg_at_100
value: 41.349999999999994
- type: ndcg_at_1000
value: 44.689
- type: ndcg_at_3
value: 30.264999999999997
- type: ndcg_at_5
value: 31.572
- type: precision_at_1
value: 30.408
- type: precision_at_10
value: 7.6770000000000005
- type: precision_at_100
value: 1.352
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 17.213
- type: precision_at_5
value: 12.198
- type: recall_at_1
value: 19.482
- type: recall_at_10
value: 42.368
- type: recall_at_100
value: 72.694
- type: recall_at_1000
value: 95.602
- type: recall_at_3
value: 30.101
- type: recall_at_5
value: 34.708
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 71.16055321707758
- type: cos_sim_ap
value: 80.21073839711723
- type: cos_sim_f1
value: 72.9740932642487
- type: cos_sim_precision
value: 65.53136050623488
- type: cos_sim_recall
value: 82.3240589198036
- type: dot_accuracy
value: 71.16055321707758
- type: dot_ap
value: 80.212299264122
- type: dot_f1
value: 72.9740932642487
- type: dot_precision
value: 65.53136050623488
- type: dot_recall
value: 82.3240589198036
- type: euclidean_accuracy
value: 71.16055321707758
- type: euclidean_ap
value: 80.21076298680417
- type: euclidean_f1
value: 72.9740932642487
- type: euclidean_precision
value: 65.53136050623488
- type: euclidean_recall
value: 82.3240589198036
- type: manhattan_accuracy
value: 70.71557426337944
- type: manhattan_ap
value: 79.93448977199749
- type: manhattan_f1
value: 72.83962726826877
- type: manhattan_precision
value: 62.7407908077053
- type: manhattan_recall
value: 86.81318681318682
- type: max_accuracy
value: 71.16055321707758
- type: max_ap
value: 80.212299264122
- type: max_f1
value: 72.9740932642487
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 60.643
- type: map_at_10
value: 69.011
- type: map_at_100
value: 69.533
- type: map_at_1000
value: 69.545
- type: map_at_3
value: 67.167
- type: map_at_5
value: 68.12700000000001
- type: mrr_at_1
value: 60.801
- type: mrr_at_10
value: 69.111
- type: mrr_at_100
value: 69.6
- type: mrr_at_1000
value: 69.611
- type: mrr_at_3
value: 67.229
- type: mrr_at_5
value: 68.214
- type: ndcg_at_1
value: 60.801
- type: ndcg_at_10
value: 73.128
- type: ndcg_at_100
value: 75.614
- type: ndcg_at_1000
value: 75.92
- type: ndcg_at_3
value: 69.261
- type: ndcg_at_5
value: 70.973
- type: precision_at_1
value: 60.801
- type: precision_at_10
value: 8.662
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 25.149
- type: precision_at_5
value: 15.953999999999999
- type: recall_at_1
value: 60.643
- type: recall_at_10
value: 85.959
- type: recall_at_100
value: 97.576
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 75.184
- type: recall_at_5
value: 79.32000000000001
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.183
- type: map_at_10
value: 23.958
- type: map_at_100
value: 34.354
- type: map_at_1000
value: 36.442
- type: map_at_3
value: 16.345000000000002
- type: map_at_5
value: 19.647000000000002
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 80.976
- type: mrr_at_100
value: 81.256
- type: mrr_at_1000
value: 81.262
- type: mrr_at_3
value: 79.958
- type: mrr_at_5
value: 80.37100000000001
- type: ndcg_at_1
value: 62.0
- type: ndcg_at_10
value: 48.894999999999996
- type: ndcg_at_100
value: 53.867
- type: ndcg_at_1000
value: 61.304
- type: ndcg_at_3
value: 53.688
- type: ndcg_at_5
value: 50.900999999999996
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 39.525
- type: precision_at_100
value: 12.323
- type: precision_at_1000
value: 2.539
- type: precision_at_3
value: 57.49999999999999
- type: precision_at_5
value: 49.1
- type: recall_at_1
value: 10.183
- type: recall_at_10
value: 29.296
- type: recall_at_100
value: 60.394999999999996
- type: recall_at_1000
value: 83.12
- type: recall_at_3
value: 17.495
- type: recall_at_5
value: 22.235
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 26.613999999999997
- type: map_at_10
value: 79.77300000000001
- type: map_at_100
value: 82.71
- type: map_at_1000
value: 82.75
- type: map_at_3
value: 55.92700000000001
- type: map_at_5
value: 70.085
- type: mrr_at_1
value: 90.7
- type: mrr_at_10
value: 93.438
- type: mrr_at_100
value: 93.504
- type: mrr_at_1000
value: 93.50699999999999
- type: mrr_at_3
value: 93.125
- type: mrr_at_5
value: 93.34
- type: ndcg_at_1
value: 90.7
- type: ndcg_at_10
value: 87.023
- type: ndcg_at_100
value: 90.068
- type: ndcg_at_1000
value: 90.43299999999999
- type: ndcg_at_3
value: 86.339
- type: ndcg_at_5
value: 85.013
- type: precision_at_1
value: 90.7
- type: precision_at_10
value: 41.339999999999996
- type: precision_at_100
value: 4.806
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 76.983
- type: precision_at_5
value: 64.69
- type: recall_at_1
value: 26.613999999999997
- type: recall_at_10
value: 87.681
- type: recall_at_100
value: 97.44699999999999
- type: recall_at_1000
value: 99.348
- type: recall_at_3
value: 57.809999999999995
- type: recall_at_5
value: 74.258
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 30.9
- type: map_at_10
value: 40.467
- type: map_at_100
value: 41.423
- type: map_at_1000
value: 41.463
- type: map_at_3
value: 37.25
- type: map_at_5
value: 39.31
- type: mrr_at_1
value: 30.9
- type: mrr_at_10
value: 40.467
- type: mrr_at_100
value: 41.423
- type: mrr_at_1000
value: 41.463
- type: mrr_at_3
value: 37.25
- type: mrr_at_5
value: 39.31
- type: ndcg_at_1
value: 30.9
- type: ndcg_at_10
value: 45.957
- type: ndcg_at_100
value: 50.735
- type: ndcg_at_1000
value: 51.861999999999995
- type: ndcg_at_3
value: 39.437
- type: ndcg_at_5
value: 43.146
- type: precision_at_1
value: 30.9
- type: precision_at_10
value: 6.35
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 15.267
- type: precision_at_5
value: 10.96
- type: recall_at_1
value: 30.9
- type: recall_at_10
value: 63.5
- type: recall_at_100
value: 86.1
- type: recall_at_1000
value: 95.1
- type: recall_at_3
value: 45.800000000000004
- type: recall_at_5
value: 54.800000000000004
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 49.765
- type: f1
value: 45.93242203574485
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 75.138
- type: map_at_10
value: 84.21300000000001
- type: map_at_100
value: 84.43
- type: map_at_1000
value: 84.441
- type: map_at_3
value: 83.071
- type: map_at_5
value: 83.853
- type: mrr_at_1
value: 80.948
- type: mrr_at_10
value: 88.175
- type: mrr_at_100
value: 88.24
- type: mrr_at_1000
value: 88.241
- type: mrr_at_3
value: 87.516
- type: mrr_at_5
value: 87.997
- type: ndcg_at_1
value: 80.948
- type: ndcg_at_10
value: 87.84100000000001
- type: ndcg_at_100
value: 88.576
- type: ndcg_at_1000
value: 88.75699999999999
- type: ndcg_at_3
value: 86.176
- type: ndcg_at_5
value: 87.214
- type: precision_at_1
value: 80.948
- type: precision_at_10
value: 10.632
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.193
- type: precision_at_5
value: 20.663
- type: recall_at_1
value: 75.138
- type: recall_at_10
value: 94.89699999999999
- type: recall_at_100
value: 97.751
- type: recall_at_1000
value: 98.833
- type: recall_at_3
value: 90.455
- type: recall_at_5
value: 93.085
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 29.45
- type: map_at_10
value: 48.596000000000004
- type: map_at_100
value: 50.70400000000001
- type: map_at_1000
value: 50.83800000000001
- type: map_at_3
value: 42.795
- type: map_at_5
value: 46.085
- type: mrr_at_1
value: 56.172999999999995
- type: mrr_at_10
value: 64.35300000000001
- type: mrr_at_100
value: 64.947
- type: mrr_at_1000
value: 64.967
- type: mrr_at_3
value: 62.653999999999996
- type: mrr_at_5
value: 63.534
- type: ndcg_at_1
value: 56.172999999999995
- type: ndcg_at_10
value: 56.593
- type: ndcg_at_100
value: 62.942
- type: ndcg_at_1000
value: 64.801
- type: ndcg_at_3
value: 53.024
- type: ndcg_at_5
value: 53.986999999999995
- type: precision_at_1
value: 56.172999999999995
- type: precision_at_10
value: 15.494
- type: precision_at_100
value: 2.222
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 35.185
- type: precision_at_5
value: 25.556
- type: recall_at_1
value: 29.45
- type: recall_at_10
value: 62.882000000000005
- type: recall_at_100
value: 85.56099999999999
- type: recall_at_1000
value: 96.539
- type: recall_at_3
value: 47.911
- type: recall_at_5
value: 54.52
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.581
- type: map_at_10
value: 68.401
- type: map_at_100
value: 69.207
- type: map_at_1000
value: 69.25200000000001
- type: map_at_3
value: 64.689
- type: map_at_5
value: 67.158
- type: mrr_at_1
value: 79.163
- type: mrr_at_10
value: 85.22999999999999
- type: mrr_at_100
value: 85.386
- type: mrr_at_1000
value: 85.39099999999999
- type: mrr_at_3
value: 84.432
- type: mrr_at_5
value: 84.952
- type: ndcg_at_1
value: 79.163
- type: ndcg_at_10
value: 75.721
- type: ndcg_at_100
value: 78.411
- type: ndcg_at_1000
value: 79.23599999999999
- type: ndcg_at_3
value: 70.68799999999999
- type: ndcg_at_5
value: 73.694
- type: precision_at_1
value: 79.163
- type: precision_at_10
value: 16.134
- type: precision_at_100
value: 1.821
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 46.446
- type: precision_at_5
value: 30.242
- type: recall_at_1
value: 39.581
- type: recall_at_10
value: 80.66799999999999
- type: recall_at_100
value: 91.033
- type: recall_at_1000
value: 96.408
- type: recall_at_3
value: 69.669
- type: recall_at_5
value: 75.604
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 45.04809542131589
- type: f1
value: 37.01181779071118
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.78120000000001
- type: ap
value: 92.52931921594387
- type: f1
value: 94.77902110732532
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.430320593468394
- type: f1
value: 79.95467268178068
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 71.05801751913393
- type: cos_sim_spearman
value: 75.47954644971965
- type: euclidean_pearson
value: 74.27472296759713
- type: euclidean_spearman
value: 75.47954201369866
- type: manhattan_pearson
value: 74.30508190186474
- type: manhattan_spearman
value: 75.51326518159436
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 24.21110921666315
- type: mrr
value: 22.863492063492064
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 61.38400000000001
- type: map_at_10
value: 70.895
- type: map_at_100
value: 71.314
- type: map_at_1000
value: 71.331
- type: map_at_3
value: 69.016
- type: map_at_5
value: 70.179
- type: mrr_at_1
value: 63.481
- type: mrr_at_10
value: 71.543
- type: mrr_at_100
value: 71.91300000000001
- type: mrr_at_1000
value: 71.928
- type: mrr_at_3
value: 69.90899999999999
- type: mrr_at_5
value: 70.907
- type: ndcg_at_1
value: 63.481
- type: ndcg_at_10
value: 74.833
- type: ndcg_at_100
value: 76.705
- type: ndcg_at_1000
value: 77.13600000000001
- type: ndcg_at_3
value: 71.236
- type: ndcg_at_5
value: 73.199
- type: precision_at_1
value: 63.481
- type: precision_at_10
value: 9.179
- type: precision_at_100
value: 1.011
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 27.044
- type: precision_at_5
value: 17.272000000000002
- type: recall_at_1
value: 61.38400000000001
- type: recall_at_10
value: 86.318
- type: recall_at_100
value: 94.786
- type: recall_at_1000
value: 98.14500000000001
- type: recall_at_3
value: 76.717
- type: recall_at_5
value: 81.416
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.363999999999997
- type: map_at_10
value: 36.022
- type: map_at_100
value: 37.229
- type: map_at_1000
value: 37.274
- type: map_at_3
value: 32.131
- type: map_at_5
value: 34.391
- type: mrr_at_1
value: 24.069
- type: mrr_at_10
value: 36.620000000000005
- type: mrr_at_100
value: 37.769999999999996
- type: mrr_at_1000
value: 37.809
- type: mrr_at_3
value: 32.846
- type: mrr_at_5
value: 35.02
- type: ndcg_at_1
value: 24.069
- type: ndcg_at_10
value: 43.056
- type: ndcg_at_100
value: 48.754
- type: ndcg_at_1000
value: 49.829
- type: ndcg_at_3
value: 35.167
- type: ndcg_at_5
value: 39.168
- type: precision_at_1
value: 24.069
- type: precision_at_10
value: 6.762
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.957
- type: precision_at_5
value: 11.023
- type: recall_at_1
value: 23.363999999999997
- type: recall_at_10
value: 64.696
- type: recall_at_100
value: 90.795
- type: recall_at_1000
value: 98.892
- type: recall_at_3
value: 43.247
- type: recall_at_5
value: 52.86300000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.11947104423166
- type: f1
value: 95.89561841159332
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.97548605240912
- type: f1
value: 92.17133696717212
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.37224816544364
- type: f1
value: 93.19978829237863
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.28719072972127
- type: f1
value: 91.28448045979604
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.8131946934385
- type: f1
value: 88.27883019362747
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.52260397830018
- type: f1
value: 85.15528226728568
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 86.10807113543093
- type: f1
value: 70.88498219072167
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.77120315581854
- type: f1
value: 57.97153920153224
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.93995997331554
- type: f1
value: 58.839203810064866
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.801440651425
- type: f1
value: 58.68009647839332
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 72.90785227680172
- type: f1
value: 49.83760954655788
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.24050632911391
- type: f1
value: 52.0562553541082
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.47948890383321
- type: f1
value: 63.334877563135485
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.2871553463349
- type: f1
value: 43.17658050605427
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.174176193678555
- type: f1
value: 59.236659587042425
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.226630800269
- type: f1
value: 60.951842696956184
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.94283792871555
- type: f1
value: 61.40057652844215
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.480833893745796
- type: f1
value: 52.5298332072816
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.52858103564223
- type: f1
value: 69.3770851919204
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.09213180901143
- type: f1
value: 71.13518469365879
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.31203765971756
- type: f1
value: 66.05906970865144
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.57162071284465
- type: f1
value: 77.7866172598823
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.09414929388029
- type: f1
value: 72.5712594833695
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.20914593140553
- type: f1
value: 68.90619124909186
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.74243443174176
- type: f1
value: 64.72743141749955
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.11096166778749
- type: f1
value: 72.61849933064694
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.22394082044384
- type: f1
value: 62.43648797607235
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.44855413584399
- type: f1
value: 66.56851670913659
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.4149293880296
- type: f1
value: 66.12960877904776
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.916610625420304
- type: f1
value: 54.02534600927991
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.71351714862138
- type: f1
value: 69.70227985126316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.91257565568257
- type: f1
value: 57.06811572144974
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.25218560860793
- type: f1
value: 72.48057563104247
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.35507733691998
- type: f1
value: 73.03024649541128
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.918628110289184
- type: f1
value: 54.75590124456177
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.548755884330866
- type: f1
value: 51.5356975360209
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.44922663080027
- type: f1
value: 44.561114416830975
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.95763281775386
- type: f1
value: 50.68367245122476
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.20645595158035
- type: f1
value: 71.78450093258185
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.226630800269
- type: f1
value: 57.53988988993337
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.44922663080027
- type: f1
value: 48.58809018065056
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.3752521856086
- type: f1
value: 49.91373941436425
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.85205110961668
- type: f1
value: 67.05660019588582
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 49.1492938802959
- type: f1
value: 46.717578025393195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 67.45406609372205
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.82851378614662
- type: f1
value: 71.15951964393868
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.84868863483524
- type: f1
value: 71.76056802364877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 75.27236045729657
- type: f1
value: 72.48733090101163
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.63012777404168
- type: f1
value: 66.56444015346203
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.62743779421655
- type: f1
value: 73.82720656992142
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 64.41418309797744
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.8399462004035
- type: f1
value: 56.050989519693886
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 70.80682480844303
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.36718224613316
- type: f1
value: 54.998746471013774
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.150638870208475
- type: f1
value: 49.79179342620099
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.50638870208473
- type: f1
value: 49.778960742003555
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.906523201076
- type: f1
value: 66.75784022138245
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.73234700739744
- type: f1
value: 65.75016141148413
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.06792199058508
- type: f1
value: 67.90334782594083
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.09145931405515
- type: f1
value: 58.88703095210731
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.17014122394083
- type: f1
value: 68.43676277921544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.99327505043712
- type: f1
value: 72.26813373392943
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.13987895090787
- type: f1
value: 70.29309514467575
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.37256220578345
- type: f1
value: 72.56456170538992
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 47.205783456624076
- type: f1
value: 45.905999859074434
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.8352387357095
- type: f1
value: 69.43553987525273
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.00403496973773
- type: f1
value: 65.97477215779143
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04976462676531
- type: f1
value: 67.24581993778398
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.882985877605925
- type: f1
value: 59.995293199988794
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.75857431069267
- type: f1
value: 76.52031675299841
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.03496973772697
- type: f1
value: 79.25548063175344
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.96570275722931
- type: f1
value: 72.19110435289122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 82.38735709482178
- type: f1
value: 82.34495627619785
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.83994620040352
- type: f1
value: 78.91526355393667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.7350369872226
- type: f1
value: 75.919437344927
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.21721587088096
- type: f1
value: 70.82973286243262
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.59784801613988
- type: f1
value: 78.47383161087423
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.64021519838602
- type: f1
value: 68.45118053027653
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.51042367182245
- type: f1
value: 72.90013022879003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.0551445864156
- type: f1
value: 73.45871761713292
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.54606590450571
- type: f1
value: 57.72711794953869
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.40753194351042
- type: f1
value: 76.8157455506521
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.58372562205783
- type: f1
value: 65.2654868709758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.39273705447208
- type: f1
value: 78.3592956594837
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.62004034969739
- type: f1
value: 79.78673754501855
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.29051782111634
- type: f1
value: 63.12502587609454
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.51849361129791
- type: f1
value: 56.32320906403241
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.41761936785474
- type: f1
value: 49.113762010098306
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.547410894418284
- type: f1
value: 56.87580674198118
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.89038332212507
- type: f1
value: 79.09210140529848
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.503698722259585
- type: f1
value: 61.45718858568352
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.02824478816408
- type: f1
value: 52.732738981386504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.23671822461331
- type: f1
value: 52.688080372545286
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.5312710154674
- type: f1
value: 74.59368478550698
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.192333557498316
- type: f1
value: 50.18302290152229
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.6960322797579
- type: f1
value: 75.25331182714856
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.47679892400808
- type: f1
value: 78.24044732352424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.36718224613315
- type: f1
value: 77.2714452985389
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.96234028244788
- type: f1
value: 78.21282127011372
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.19435104236717
- type: f1
value: 73.1963711292812
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.52118359112306
- type: f1
value: 80.4179964390288
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.65837256220577
- type: f1
value: 73.07156989634905
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.02824478816409
- type: f1
value: 62.972399027713664
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.87020847343645
- type: f1
value: 78.224240866849
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.6570275722932
- type: f1
value: 63.274871811412545
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.760591795561524
- type: f1
value: 56.73711528075771
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 57.26967047747142
- type: f1
value: 55.74735330863165
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.46133154001345
- type: f1
value: 71.9644168952811
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.70880968392737
- type: f1
value: 73.61543141070884
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.0437121721587
- type: f1
value: 74.83359868879921
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.05110961667788
- type: f1
value: 66.25869819274315
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.52118359112306
- type: f1
value: 75.92098546052303
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.92938802958977
- type: f1
value: 79.79833572573796
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.86617350369872
- type: f1
value: 77.42645654909516
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 44.6
- type: map_at_10
value: 50.019000000000005
- type: map_at_100
value: 50.611
- type: map_at_1000
value: 50.67
- type: map_at_3
value: 48.699999999999996
- type: map_at_5
value: 49.455
- type: mrr_at_1
value: 44.800000000000004
- type: mrr_at_10
value: 50.119
- type: mrr_at_100
value: 50.711
- type: mrr_at_1000
value: 50.77
- type: mrr_at_3
value: 48.8
- type: mrr_at_5
value: 49.555
- type: ndcg_at_1
value: 44.6
- type: ndcg_at_10
value: 52.754
- type: ndcg_at_100
value: 55.935
- type: ndcg_at_1000
value: 57.607
- type: ndcg_at_3
value: 50.012
- type: ndcg_at_5
value: 51.393
- type: precision_at_1
value: 44.6
- type: precision_at_10
value: 6.140000000000001
- type: precision_at_100
value: 0.77
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 17.933
- type: precision_at_5
value: 11.44
- type: recall_at_1
value: 44.6
- type: recall_at_10
value: 61.4
- type: recall_at_100
value: 77.0
- type: recall_at_1000
value: 90.4
- type: recall_at_3
value: 53.800000000000004
- type: recall_at_5
value: 57.199999999999996
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 38.192667527616315
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.44738902946689
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.59661273103955
- type: mrr
value: 33.82024242497473
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 73.31333333333335
- type: f1
value: 73.0873466527602
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.471
- type: map_at_10
value: 14.142
- type: map_at_100
value: 18.179000000000002
- type: map_at_1000
value: 19.772000000000002
- type: map_at_3
value: 9.716
- type: map_at_5
value: 11.763
- type: mrr_at_1
value: 51.393
- type: mrr_at_10
value: 58.814
- type: mrr_at_100
value: 59.330000000000005
- type: mrr_at_1000
value: 59.35
- type: mrr_at_3
value: 56.398
- type: mrr_at_5
value: 58.038999999999994
- type: ndcg_at_1
value: 49.69
- type: ndcg_at_10
value: 38.615
- type: ndcg_at_100
value: 35.268
- type: ndcg_at_1000
value: 43.745
- type: ndcg_at_3
value: 43.187
- type: ndcg_at_5
value: 41.528999999999996
- type: precision_at_1
value: 51.083999999999996
- type: precision_at_10
value: 29.474
- type: precision_at_100
value: 9.167
- type: precision_at_1000
value: 2.2089999999999996
- type: precision_at_3
value: 40.351
- type: precision_at_5
value: 36.285000000000004
- type: recall_at_1
value: 5.471
- type: recall_at_10
value: 19.242
- type: recall_at_100
value: 37.14
- type: recall_at_1000
value: 68.35900000000001
- type: recall_at_3
value: 10.896
- type: recall_at_5
value: 14.75
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.499
- type: map_at_10
value: 55.862
- type: map_at_100
value: 56.667
- type: map_at_1000
value: 56.684999999999995
- type: map_at_3
value: 51.534
- type: map_at_5
value: 54.2
- type: mrr_at_1
value: 44.351
- type: mrr_at_10
value: 58.567
- type: mrr_at_100
value: 59.099000000000004
- type: mrr_at_1000
value: 59.109
- type: mrr_at_3
value: 55.218999999999994
- type: mrr_at_5
value: 57.391999999999996
- type: ndcg_at_1
value: 44.322
- type: ndcg_at_10
value: 63.535
- type: ndcg_at_100
value: 66.654
- type: ndcg_at_1000
value: 66.991
- type: ndcg_at_3
value: 55.701
- type: ndcg_at_5
value: 60.06700000000001
- type: precision_at_1
value: 44.322
- type: precision_at_10
value: 10.026
- type: precision_at_100
value: 1.18
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.865000000000002
- type: precision_at_5
value: 17.48
- type: recall_at_1
value: 39.499
- type: recall_at_10
value: 84.053
- type: recall_at_100
value: 97.11
- type: recall_at_1000
value: 99.493
- type: recall_at_3
value: 64.091
- type: recall_at_5
value: 74.063
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: None
metrics:
- type: cos_sim_accuracy
value: 61.18029236599891
- type: cos_sim_ap
value: 64.18398769398412
- type: cos_sim_f1
value: 67.96347757046446
- type: cos_sim_precision
value: 54.4529262086514
- type: cos_sim_recall
value: 90.3907074973601
- type: dot_accuracy
value: 61.18029236599891
- type: dot_ap
value: 64.18393484706077
- type: dot_f1
value: 67.96347757046446
- type: dot_precision
value: 54.4529262086514
- type: dot_recall
value: 90.3907074973601
- type: euclidean_accuracy
value: 61.18029236599891
- type: euclidean_ap
value: 64.18395024821486
- type: euclidean_f1
value: 67.96347757046446
- type: euclidean_precision
value: 54.4529262086514
- type: euclidean_recall
value: 90.3907074973601
- type: manhattan_accuracy
value: 61.451001624255554
- type: manhattan_ap
value: 64.38232708763513
- type: manhattan_f1
value: 68.05860805860804
- type: manhattan_precision
value: 52.10319685922602
- type: manhattan_recall
value: 98.09926082365365
- type: max_accuracy
value: 61.451001624255554
- type: max_ap
value: 64.38232708763513
- type: max_f1
value: 68.05860805860804
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 92.19000000000001
- type: ap
value: 89.73918431886767
- type: f1
value: 92.17175032574507
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 15.079320253752224
- type: cos_sim_spearman
value: 16.813772504404263
- type: euclidean_pearson
value: 19.476541162041762
- type: euclidean_spearman
value: 16.813772498098782
- type: manhattan_pearson
value: 19.497429832915277
- type: manhattan_spearman
value: 16.869600674180607
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 30.36139599797913
- type: cos_sim_spearman
value: 31.80296402851347
- type: euclidean_pearson
value: 30.10387888252793
- type: euclidean_spearman
value: 31.80297780103808
- type: manhattan_pearson
value: 30.86720382849436
- type: manhattan_spearman
value: 32.70491131366606
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.911
- type: map_at_10
value: 86.087
- type: map_at_100
value: 86.701
- type: map_at_1000
value: 86.715
- type: map_at_3
value: 83.231
- type: map_at_5
value: 85.051
- type: mrr_at_1
value: 82.75
- type: mrr_at_10
value: 88.759
- type: mrr_at_100
value: 88.844
- type: mrr_at_1000
value: 88.844
- type: mrr_at_3
value: 87.935
- type: mrr_at_5
value: 88.504
- type: ndcg_at_1
value: 82.75
- type: ndcg_at_10
value: 89.605
- type: ndcg_at_100
value: 90.664
- type: ndcg_at_1000
value: 90.733
- type: ndcg_at_3
value: 87.03
- type: ndcg_at_5
value: 88.473
- type: precision_at_1
value: 82.75
- type: precision_at_10
value: 13.575000000000001
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.153
- type: precision_at_5
value: 25.008000000000003
- type: recall_at_1
value: 71.911
- type: recall_at_10
value: 96.261
- type: recall_at_100
value: 99.72800000000001
- type: recall_at_1000
value: 99.993
- type: recall_at_3
value: 88.762
- type: recall_at_5
value: 92.949
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 57.711581165572376
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.48938885750297
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.7379999999999995
- type: map_at_10
value: 9.261
- type: map_at_100
value: 11.001
- type: map_at_1000
value: 11.262
- type: map_at_3
value: 6.816
- type: map_at_5
value: 8.0
- type: mrr_at_1
value: 18.4
- type: mrr_at_10
value: 28.755999999999997
- type: mrr_at_100
value: 29.892000000000003
- type: mrr_at_1000
value: 29.961
- type: mrr_at_3
value: 25.467000000000002
- type: mrr_at_5
value: 27.332
- type: ndcg_at_1
value: 18.4
- type: ndcg_at_10
value: 16.296
- type: ndcg_at_100
value: 23.52
- type: ndcg_at_1000
value: 28.504
- type: ndcg_at_3
value: 15.485
- type: ndcg_at_5
value: 13.471
- type: precision_at_1
value: 18.4
- type: precision_at_10
value: 8.469999999999999
- type: precision_at_100
value: 1.8950000000000002
- type: precision_at_1000
value: 0.309
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.84
- type: recall_at_1
value: 3.7379999999999995
- type: recall_at_10
value: 17.185
- type: recall_at_100
value: 38.397
- type: recall_at_1000
value: 62.798
- type: recall_at_3
value: 8.896999999999998
- type: recall_at_5
value: 12.021999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 86.43977757480083
- type: cos_sim_spearman
value: 82.64182475199533
- type: euclidean_pearson
value: 83.71756009999591
- type: euclidean_spearman
value: 82.64182331395057
- type: manhattan_pearson
value: 83.8028936913025
- type: manhattan_spearman
value: 82.71024597804252
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.85653060698912
- type: cos_sim_spearman
value: 79.65598885228324
- type: euclidean_pearson
value: 83.1205137628455
- type: euclidean_spearman
value: 79.65629387709038
- type: manhattan_pearson
value: 83.71108853545837
- type: manhattan_spearman
value: 80.25617619716708
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.22921688565664
- type: cos_sim_spearman
value: 88.42662103041957
- type: euclidean_pearson
value: 87.91679798473325
- type: euclidean_spearman
value: 88.42662103041957
- type: manhattan_pearson
value: 88.16927537961303
- type: manhattan_spearman
value: 88.81581680062541
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 86.77261424554293
- type: cos_sim_spearman
value: 84.53930146434155
- type: euclidean_pearson
value: 85.67420491389697
- type: euclidean_spearman
value: 84.53929771783851
- type: manhattan_pearson
value: 85.74306784515618
- type: manhattan_spearman
value: 84.7399304675314
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 89.86138395166455
- type: cos_sim_spearman
value: 90.42577823022054
- type: euclidean_pearson
value: 89.8787763797515
- type: euclidean_spearman
value: 90.42577823022054
- type: manhattan_pearson
value: 89.9592937492158
- type: manhattan_spearman
value: 90.63535505335524
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 86.5176674585941
- type: cos_sim_spearman
value: 87.6842917085397
- type: euclidean_pearson
value: 86.70213081520711
- type: euclidean_spearman
value: 87.6842917085397
- type: manhattan_pearson
value: 86.83702628983627
- type: manhattan_spearman
value: 87.87791000374443
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.86395454805867
- type: cos_sim_spearman
value: 83.69454595252267
- type: euclidean_pearson
value: 83.04743892608313
- type: euclidean_spearman
value: 83.69454026433006
- type: manhattan_pearson
value: 83.4032095553322
- type: manhattan_spearman
value: 84.11527379013802
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 81.80249894729546
- type: cos_sim_spearman
value: 81.87004960533409
- type: euclidean_pearson
value: 80.0392760044179
- type: euclidean_spearman
value: 81.87004960533409
- type: manhattan_pearson
value: 80.38096542355912
- type: manhattan_spearman
value: 82.40774679630341
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 77.6158201787172
- type: cos_sim_spearman
value: 77.934651044009
- type: euclidean_pearson
value: 77.7874683895269
- type: euclidean_spearman
value: 77.934651044009
- type: manhattan_pearson
value: 78.36151849193052
- type: manhattan_spearman
value: 78.52439586349938
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.04363311392207
- type: cos_sim_spearman
value: 87.30483659369973
- type: euclidean_pearson
value: 87.62634489502616
- type: euclidean_spearman
value: 87.30483659369973
- type: manhattan_pearson
value: 88.02340837141445
- type: manhattan_spearman
value: 87.55012003294
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 91.69172851958248
- type: cos_sim_spearman
value: 91.7546879482416
- type: euclidean_pearson
value: 91.84843039183963
- type: euclidean_spearman
value: 91.7546879482416
- type: manhattan_pearson
value: 91.72325753804357
- type: manhattan_spearman
value: 91.55330259513397
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 73.95572901084864
- type: cos_sim_spearman
value: 72.56217821552626
- type: euclidean_pearson
value: 74.24242980323574
- type: euclidean_spearman
value: 72.56217821552626
- type: manhattan_pearson
value: 74.57473362519922
- type: manhattan_spearman
value: 72.76048826648497
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.93329396008296
- type: cos_sim_spearman
value: 88.2406635486219
- type: euclidean_pearson
value: 87.49687343908533
- type: euclidean_spearman
value: 88.2406635486219
- type: manhattan_pearson
value: 88.14088309231084
- type: manhattan_spearman
value: 88.93314020908534
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.70124451546057
- type: cos_sim_spearman
value: 87.45988160052252
- type: euclidean_pearson
value: 88.44395505247728
- type: euclidean_spearman
value: 87.45988160052252
- type: manhattan_pearson
value: 88.69269783495425
- type: manhattan_spearman
value: 87.65383425621
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.64109149761346
- type: cos_sim_spearman
value: 88.06459637689733
- type: euclidean_pearson
value: 88.02313315797703
- type: euclidean_spearman
value: 88.06459637689733
- type: manhattan_pearson
value: 88.28328539133253
- type: manhattan_spearman
value: 88.06605708379142
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9040028177525
- type: cos_sim_spearman
value: 89.68152202933464
- type: euclidean_pearson
value: 89.23684469601253
- type: euclidean_spearman
value: 89.68152202933464
- type: manhattan_pearson
value: 89.59504307277454
- type: manhattan_spearman
value: 89.88060100313582
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.69891585325125
- type: cos_sim_spearman
value: 88.25252785071736
- type: euclidean_pearson
value: 87.99932873748662
- type: euclidean_spearman
value: 88.25252785071736
- type: manhattan_pearson
value: 88.26959683009446
- type: manhattan_spearman
value: 88.32583227300715
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.53235909794135
- type: cos_sim_spearman
value: 66.97521740529574
- type: euclidean_pearson
value: 68.19502223613912
- type: euclidean_spearman
value: 66.97521740529574
- type: manhattan_pearson
value: 68.39070714774539
- type: manhattan_spearman
value: 67.1072812364868
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 43.715742021204775
- type: cos_sim_spearman
value: 49.12255971271453
- type: euclidean_pearson
value: 40.76848562610837
- type: euclidean_spearman
value: 49.12255971271453
- type: manhattan_pearson
value: 40.92204625614112
- type: manhattan_spearman
value: 49.23333793661129
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.35268345563588
- type: cos_sim_spearman
value: 66.99661626042061
- type: euclidean_pearson
value: 65.85589122857066
- type: euclidean_spearman
value: 66.99661626042061
- type: manhattan_pearson
value: 66.78454301512294
- type: manhattan_spearman
value: 67.17570330149233
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 33.36599908204445
- type: cos_sim_spearman
value: 39.20768331939503
- type: euclidean_pearson
value: 22.16066769530468
- type: euclidean_spearman
value: 39.20768331939503
- type: manhattan_pearson
value: 22.386053195546022
- type: manhattan_spearman
value: 39.70172817465986
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.06813956986753
- type: cos_sim_spearman
value: 68.72065117995668
- type: euclidean_pearson
value: 66.97373456344194
- type: euclidean_spearman
value: 68.72065117995668
- type: manhattan_pearson
value: 67.34907265771595
- type: manhattan_spearman
value: 68.73705769957843
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.17664865207108
- type: cos_sim_spearman
value: 54.115568323148864
- type: euclidean_pearson
value: 48.56418162879182
- type: euclidean_spearman
value: 54.115568323148864
- type: manhattan_pearson
value: 48.85951643453165
- type: manhattan_spearman
value: 54.13599784169052
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.87514136275987
- type: cos_sim_spearman
value: 60.82923573674973
- type: euclidean_pearson
value: 53.724183308215615
- type: euclidean_spearman
value: 60.82923573674973
- type: manhattan_pearson
value: 53.954305573102445
- type: manhattan_spearman
value: 60.957483900644526
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.55001413648593
- type: cos_sim_spearman
value: 63.395777040381276
- type: euclidean_pearson
value: 59.869972550293305
- type: euclidean_spearman
value: 63.395777040381276
- type: manhattan_pearson
value: 61.16195496847885
- type: manhattan_spearman
value: 63.41968682525581
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 79.13334972675852
- type: cos_sim_spearman
value: 79.86263136371802
- type: euclidean_pearson
value: 78.2433603592541
- type: euclidean_spearman
value: 79.86263136371802
- type: manhattan_pearson
value: 78.87337106318412
- type: manhattan_spearman
value: 80.31230584758441
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.559700748242356
- type: cos_sim_spearman
value: 60.92342109509558
- type: euclidean_pearson
value: 66.07256437521119
- type: euclidean_spearman
value: 60.92342109509558
- type: manhattan_pearson
value: 67.72769744612663
- type: manhattan_spearman
value: 59.64714507774168
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.93491616145891
- type: cos_sim_spearman
value: 75.84242594400156
- type: euclidean_pearson
value: 74.87279745626121
- type: euclidean_spearman
value: 75.84242594400156
- type: manhattan_pearson
value: 76.47764144677505
- type: manhattan_spearman
value: 77.08411157845183
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.75624124540954
- type: cos_sim_spearman
value: 75.8667941654703
- type: euclidean_pearson
value: 73.74314588451925
- type: euclidean_spearman
value: 75.8667941654703
- type: manhattan_pearson
value: 73.99641425871518
- type: manhattan_spearman
value: 76.1982840205817
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 75.20898141298767
- type: cos_sim_spearman
value: 73.18060375331436
- type: euclidean_pearson
value: 75.44489280944619
- type: euclidean_spearman
value: 73.18060375331436
- type: manhattan_pearson
value: 75.65451039552286
- type: manhattan_spearman
value: 72.97744006123156
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.04278252247816
- type: cos_sim_spearman
value: 71.8846446821539
- type: euclidean_pearson
value: 73.16043307050612
- type: euclidean_spearman
value: 71.8846446821539
- type: manhattan_pearson
value: 74.76905116839777
- type: manhattan_spearman
value: 72.66237093518471
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.71033173838558
- type: cos_sim_spearman
value: 75.043122881885
- type: euclidean_pearson
value: 72.77579680345087
- type: euclidean_spearman
value: 75.043122881885
- type: manhattan_pearson
value: 72.99901534854922
- type: manhattan_spearman
value: 75.15418335015957
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.75733447190482
- type: cos_sim_spearman
value: 61.38968334176681
- type: euclidean_pearson
value: 55.479231520643744
- type: euclidean_spearman
value: 61.38968334176681
- type: manhattan_pearson
value: 56.05230571465244
- type: manhattan_spearman
value: 62.69383054007398
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 41.72244325050302
- type: cos_sim_spearman
value: 54.47476909084119
- type: euclidean_pearson
value: 43.94629756436873
- type: euclidean_spearman
value: 54.47476909084119
- type: manhattan_pearson
value: 46.36533046394657
- type: manhattan_spearman
value: 54.87509243633636
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.75183711835146
- type: cos_sim_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 71.84188960126669
- type: euclidean_spearman
value: 84.51542547285167
- type: manhattan_pearson
value: 73.94847166379994
- type: manhattan_spearman
value: 84.51542547285167
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 81.78690149086131
- type: cos_sim_spearman
value: 81.81202616916873
- type: euclidean_pearson
value: 80.98792254251062
- type: euclidean_spearman
value: 81.81202616916873
- type: manhattan_pearson
value: 81.46953021346732
- type: manhattan_spearman
value: 82.34259562492315
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.68273341294419
- type: cos_sim_spearman
value: 88.59927164210958
- type: euclidean_pearson
value: 88.10745681818025
- type: euclidean_spearman
value: 88.59927164210958
- type: manhattan_pearson
value: 88.25166703784649
- type: manhattan_spearman
value: 88.85343247873482
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.3340463345719
- type: mrr
value: 96.5182611506141
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.967000000000006
- type: map_at_10
value: 71.873
- type: map_at_100
value: 72.271
- type: map_at_1000
value: 72.292
- type: map_at_3
value: 69.006
- type: map_at_5
value: 70.856
- type: mrr_at_1
value: 63.666999999999994
- type: mrr_at_10
value: 72.929
- type: mrr_at_100
value: 73.26
- type: mrr_at_1000
value: 73.282
- type: mrr_at_3
value: 71.111
- type: mrr_at_5
value: 72.328
- type: ndcg_at_1
value: 63.666999999999994
- type: ndcg_at_10
value: 76.414
- type: ndcg_at_100
value: 78.152
- type: ndcg_at_1000
value: 78.604
- type: ndcg_at_3
value: 71.841
- type: ndcg_at_5
value: 74.435
- type: precision_at_1
value: 63.666999999999994
- type: precision_at_10
value: 10.067
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.667
- type: precision_at_5
value: 18.467
- type: recall_at_1
value: 60.967000000000006
- type: recall_at_10
value: 88.922
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 77.228
- type: recall_at_5
value: 83.428
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.82277227722773
- type: cos_sim_ap
value: 95.66279851444406
- type: cos_sim_f1
value: 90.9367088607595
- type: cos_sim_precision
value: 92.1025641025641
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.82277227722773
- type: dot_ap
value: 95.66279851444406
- type: dot_f1
value: 90.9367088607595
- type: dot_precision
value: 92.1025641025641
- type: dot_recall
value: 89.8
- type: euclidean_accuracy
value: 99.82277227722773
- type: euclidean_ap
value: 95.66279851444406
- type: euclidean_f1
value: 90.9367088607595
- type: euclidean_precision
value: 92.1025641025641
- type: euclidean_recall
value: 89.8
- type: manhattan_accuracy
value: 99.82673267326733
- type: manhattan_ap
value: 95.86094873177069
- type: manhattan_f1
value: 91.26788357178096
- type: manhattan_precision
value: 90.06815968841285
- type: manhattan_recall
value: 92.5
- type: max_accuracy
value: 99.82673267326733
- type: max_ap
value: 95.86094873177069
- type: max_f1
value: 91.26788357178096
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 73.09533925852372
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 45.90745648090035
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91147686504404
- type: mrr
value: 56.03900082760377
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.46908662038217
- type: cos_sim_spearman
value: 31.40325730367437
- type: dot_pearson
value: 31.469083969291894
- type: dot_spearman
value: 31.40325730367437
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 66.90300783402137
- type: mrr
value: 77.06451972574179
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 25.82
- type: map_at_10
value: 72.32300000000001
- type: map_at_100
value: 76.198
- type: map_at_1000
value: 76.281
- type: map_at_3
value: 50.719
- type: map_at_5
value: 62.326
- type: mrr_at_1
value: 86.599
- type: mrr_at_10
value: 89.751
- type: mrr_at_100
value: 89.876
- type: mrr_at_1000
value: 89.88000000000001
- type: mrr_at_3
value: 89.151
- type: mrr_at_5
value: 89.519
- type: ndcg_at_1
value: 86.599
- type: ndcg_at_10
value: 80.676
- type: ndcg_at_100
value: 85.03
- type: ndcg_at_1000
value: 85.854
- type: ndcg_at_3
value: 82.057
- type: ndcg_at_5
value: 80.537
- type: precision_at_1
value: 86.599
- type: precision_at_10
value: 40.373
- type: precision_at_100
value: 4.95
- type: precision_at_1000
value: 0.514
- type: precision_at_3
value: 71.918
- type: precision_at_5
value: 60.246
- type: recall_at_1
value: 25.82
- type: recall_at_10
value: 79.905
- type: recall_at_100
value: 93.88499999999999
- type: recall_at_1000
value: 98.073
- type: recall_at_3
value: 52.623
- type: recall_at_5
value: 66.233
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: None
metrics:
- type: accuracy
value: 47.050000000000004
- type: f1
value: 45.704071498353294
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.243
- type: map_at_10
value: 2.278
- type: map_at_100
value: 14.221
- type: map_at_1000
value: 33.474
- type: map_at_3
value: 0.7270000000000001
- type: map_at_5
value: 1.183
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 87.249
- type: ndcg_at_100
value: 67.876
- type: ndcg_at_1000
value: 59.205
- type: ndcg_at_3
value: 90.12299999999999
- type: ndcg_at_5
value: 89.126
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 90.8
- type: precision_at_100
value: 69.28
- type: precision_at_1000
value: 25.85
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.243
- type: recall_at_10
value: 2.392
- type: recall_at_100
value: 16.982
- type: recall_at_1000
value: 55.214
- type: recall_at_3
value: 0.745
- type: recall_at_5
value: 1.2229999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.5
- type: f1
value: 67.05501804646966
- type: precision
value: 65.73261904761904
- type: recall
value: 70.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.14450867052022
- type: f1
value: 70.98265895953759
- type: precision
value: 69.26782273603082
- type: recall
value: 75.14450867052022
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 33.170731707317074
- type: f1
value: 29.92876500193573
- type: precision
value: 28.669145894755648
- type: recall
value: 33.170731707317074
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.13333333333333
- type: precision
value: 93.46666666666667
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.6
- type: f1
value: 99.46666666666665
- type: precision
value: 99.4
- type: recall
value: 99.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.39999999999999
- type: precision
value: 96.0
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.5
- type: f1
value: 92.99666666666667
- type: precision
value: 92.31666666666666
- type: recall
value: 94.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.82089552238806
- type: f1
value: 81.59203980099502
- type: precision
value: 79.60199004975124
- type: recall
value: 85.82089552238806
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.5
- type: f1
value: 75.11246031746032
- type: precision
value: 73.38734126984127
- type: recall
value: 79.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.390243902439025
- type: f1
value: 38.48896631823461
- type: precision
value: 36.57220286488579
- type: recall
value: 44.390243902439025
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.57333333333334
- type: precision
value: 86.34166666666665
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.82138517618469
- type: f1
value: 85.98651854423423
- type: precision
value: 84.79257073424753
- type: recall
value: 88.82138517618469
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.04347826086956
- type: f1
value: 72.32108147606868
- type: precision
value: 70.37207357859532
- type: recall
value: 77.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.04347826086957
- type: f1
value: 46.88868184955141
- type: precision
value: 44.71730105643149
- type: recall
value: 53.04347826086957
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.0
- type: f1
value: 62.891813186813195
- type: precision
value: 61.037906162464985
- type: recall
value: 68.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.3
- type: f1
value: 82.82000000000001
- type: precision
value: 81.25690476190475
- type: recall
value: 86.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.87816646562122
- type: f1
value: 63.53054933272062
- type: precision
value: 61.47807816331196
- type: recall
value: 68.87816646562122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.4
- type: f1
value: 68.99388888888889
- type: precision
value: 66.81035714285713
- type: recall
value: 74.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5
- type: f1
value: 87.93666666666667
- type: precision
value: 86.825
- type: recall
value: 90.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.7
- type: f1
value: 88.09
- type: precision
value: 86.85833333333333
- type: recall
value: 90.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.61904761904762
- type: f1
value: 62.30239247214037
- type: precision
value: 60.340702947845806
- type: recall
value: 67.61904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.81285714285714
- type: precision
value: 72.21570818070818
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.8
- type: f1
value: 89.66666666666667
- type: precision
value: 88.66666666666666
- type: recall
value: 91.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.85666666666665
- type: precision
value: 96.50833333333333
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.39999999999999
- type: f1
value: 93.98333333333333
- type: precision
value: 93.30000000000001
- type: recall
value: 95.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.0
- type: f1
value: 81.31538461538462
- type: precision
value: 79.70666666666666
- type: recall
value: 85.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.81888888888888
- type: precision
value: 89.08583333333333
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.3
- type: f1
value: 38.8623088023088
- type: precision
value: 37.03755623461505
- type: recall
value: 44.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.75
- type: precision
value: 93.05
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.1
- type: f1
value: 98.8
- type: precision
value: 98.65
- type: recall
value: 99.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.6765498652291
- type: f1
value: 63.991785393402644
- type: precision
value: 61.7343729944808
- type: recall
value: 69.6765498652291
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.0
- type: f1
value: 42.79341029341029
- type: precision
value: 40.25098358431692
- type: recall
value: 50.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 87.19023809523809
- type: precision
value: 86.12595238095237
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.72727272727273
- type: f1
value: 37.78789518562245
- type: precision
value: 36.24208471267295
- type: recall
value: 42.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.26205450733752
- type: f1
value: 70.72842833849123
- type: precision
value: 68.93256464011182
- type: recall
value: 75.26205450733752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.96666666666668
- type: precision
value: 93.42
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 72.40190419178747
- type: precision
value: 70.84954604409856
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.82905982905983
- type: f1
value: 52.2100122100122
- type: precision
value: 49.52516619183286
- type: recall
value: 59.82905982905983
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.69999999999999
- type: f1
value: 77.41714285714286
- type: precision
value: 75.64833333333334
- type: recall
value: 81.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.45
- type: precision
value: 93.93333333333334
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.41121495327103
- type: f1
value: 52.73495974430554
- type: precision
value: 50.717067200712066
- type: recall
value: 58.41121495327103
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.3
- type: f1
value: 69.20371794871795
- type: precision
value: 67.6597557997558
- type: recall
value: 73.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.51666666666667
- type: precision
value: 95.05
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.4
- type: f1
value: 73.88856643356644
- type: precision
value: 72.01373015873016
- type: recall
value: 78.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 94.09666666666668
- type: precision
value: 93.53333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.94
- type: precision
value: 91.10833333333333
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.89999999999999
- type: precision
value: 95.46666666666668
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.5
- type: f1
value: 66.00635642135641
- type: precision
value: 64.36345238095238
- type: recall
value: 70.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.44388888888889
- type: precision
value: 89.5767857142857
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.0
- type: f1
value: 43.15372775372776
- type: precision
value: 41.53152510162313
- type: recall
value: 48.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 16.7
- type: f1
value: 14.198431372549017
- type: precision
value: 13.411765873015872
- type: recall
value: 16.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.7
- type: f1
value: 81.81666666666666
- type: precision
value: 80.10833333333332
- type: recall
value: 85.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.64285714285714
- type: f1
value: 64.745670995671
- type: precision
value: 62.916666666666664
- type: recall
value: 69.64285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 54.665203073545555
- type: f1
value: 48.55366630916923
- type: precision
value: 46.35683318998357
- type: recall
value: 54.665203073545555
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 4.8
- type: f1
value: 3.808587223587223
- type: precision
value: 3.5653174603174604
- type: recall
value: 4.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.77333333333333
- type: precision
value: 95.39166666666667
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.39999999999999
- type: f1
value: 94.44
- type: precision
value: 93.975
- type: recall
value: 95.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.0
- type: f1
value: 37.024908424908425
- type: precision
value: 35.365992063492065
- type: recall
value: 42.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.7
- type: f1
value: 62.20460835058661
- type: precision
value: 60.590134587634594
- type: recall
value: 66.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.46666666666667
- type: precision
value: 96.06666666666668
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.3
- type: f1
value: 41.96905408317173
- type: precision
value: 40.18741402116402
- type: recall
value: 47.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.22690476190476
- type: precision
value: 74.63539682539682
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.83333333333333
- type: precision
value: 94.26666666666668
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 87.24333333333334
- type: precision
value: 86.17
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.36496350364964
- type: f1
value: 44.795520780922246
- type: precision
value: 43.09002433090024
- type: recall
value: 50.36496350364964
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 18.8
- type: f1
value: 16.242864357864356
- type: precision
value: 15.466596638655464
- type: recall
value: 18.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.92333333333333
- type: precision
value: 93.30833333333332
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.42333333333333
- type: precision
value: 90.50833333333334
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 26.190476190476193
- type: f1
value: 22.05208151636723
- type: precision
value: 21.09292328042328
- type: recall
value: 26.190476190476193
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.2
- type: f1
value: 14.021009731460952
- type: precision
value: 13.1389886698243
- type: recall
value: 17.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.67494824016563
- type: f1
value: 74.24430641821947
- type: precision
value: 72.50747642051991
- type: recall
value: 78.67494824016563
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.19999999999999
- type: f1
value: 92.54
- type: precision
value: 91.75833333333334
- type: recall
value: 94.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.78666666666666
- type: precision
value: 86.69833333333334
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.7
- type: f1
value: 12.19206214842218
- type: precision
value: 11.526261904761904
- type: recall
value: 14.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.16017316017316
- type: f1
value: 67.44858316286889
- type: precision
value: 65.23809523809523
- type: recall
value: 73.16017316017316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.19083969465649
- type: f1
value: 70.33078880407125
- type: precision
value: 68.3969465648855
- type: recall
value: 75.19083969465649
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.154294032023294
- type: f1
value: 55.86030821838681
- type: precision
value: 53.53509623160277
- type: recall
value: 62.154294032023294
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.9652380952381
- type: precision
value: 82.84242424242424
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.50282485875707
- type: f1
value: 91.54425612052731
- type: precision
value: 90.65442561205272
- type: recall
value: 93.50282485875707
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.4
- type: f1
value: 9.189775870222714
- type: precision
value: 8.66189886502811
- type: recall
value: 11.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.88666666666666
- type: precision
value: 91.21444444444444
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.0
- type: f1
value: 40.51069226095542
- type: precision
value: 38.57804926010808
- type: recall
value: 46.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.0
- type: f1
value: 89.11333333333333
- type: precision
value: 88.27000000000001
- type: recall
value: 91.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.39999999999999
- type: f1
value: 92.95
- type: precision
value: 92.27000000000001
- type: recall
value: 94.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.2
- type: f1
value: 11.73701698770113
- type: precision
value: 11.079207014736676
- type: recall
value: 14.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.14745308310992
- type: f1
value: 59.665707393589415
- type: precision
value: 57.560853653346946
- type: recall
value: 65.14745308310992
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.39999999999999
- type: f1
value: 94.0
- type: precision
value: 93.33333333333333
- type: recall
value: 95.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.56521739130434
- type: f1
value: 62.92490118577074
- type: precision
value: 60.27009222661397
- type: recall
value: 69.56521739130434
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.140845070422536
- type: f1
value: 35.96411804158283
- type: precision
value: 34.89075869357559
- type: recall
value: 40.140845070422536
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.86826347305389
- type: f1
value: 59.646248628284546
- type: precision
value: 57.22982606216139
- type: recall
value: 65.86826347305389
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.48333333333333
- type: precision
value: 92.83666666666667
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.783251231527096
- type: f1
value: 42.006447302013804
- type: precision
value: 40.12747105111637
- type: recall
value: 47.783251231527096
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.71830985915493
- type: f1
value: 64.80266212660578
- type: precision
value: 63.08098591549296
- type: recall
value: 69.71830985915493
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.94871794871796
- type: f1
value: 61.59912309912309
- type: precision
value: 59.17338217338218
- type: recall
value: 67.94871794871796
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333335
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.14613778705638
- type: f1
value: 65.4349338900487
- type: precision
value: 63.57599255302805
- type: recall
value: 70.14613778705638
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.622184434339607
- type: precision
value: 7.287048159682417
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.85016286644951
- type: f1
value: 72.83387622149837
- type: precision
value: 70.58450959102424
- type: recall
value: 77.85016286644951
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.84333333333333
- type: precision
value: 87.96666666666665
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.14
- type: precision
value: 92.49833333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.25196850393701
- type: f1
value: 80.94488188976378
- type: precision
value: 79.65879265091863
- type: recall
value: 84.25196850393701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.89666666666666
- type: precision
value: 85.7
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.797783933518005
- type: f1
value: 37.30617360155193
- type: precision
value: 35.34933825792552
- type: recall
value: 42.797783933518005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 94.93333333333332
- type: precision
value: 94.38333333333333
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 54.807692307692314
- type: f1
value: 49.506903353057204
- type: precision
value: 47.54807692307693
- type: recall
value: 54.807692307692314
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1
- type: f1
value: 83.61857142857143
- type: precision
value: 81.975
- type: recall
value: 87.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.76333333333332
- type: precision
value: 87.67
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.28999999999999
- type: precision
value: 90.44500000000001
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 39.97641509433962
- type: f1
value: 33.12271889998028
- type: precision
value: 30.95185381542554
- type: recall
value: 39.97641509433962
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.60000000000001
- type: f1
value: 90.69
- type: precision
value: 89.84500000000001
- type: recall
value: 92.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.07299270072993
- type: f1
value: 93.64355231143554
- type: precision
value: 92.94403892944038
- type: recall
value: 95.07299270072993
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.61333333333333
- type: precision
value: 88.53333333333333
- type: recall
value: 91.9
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 64.68478289806511
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 57.53010296184097
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.519
- type: map_at_10
value: 10.31
- type: map_at_100
value: 16.027
- type: map_at_1000
value: 17.827
- type: map_at_3
value: 5.721
- type: map_at_5
value: 7.7829999999999995
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 52.642999999999994
- type: mrr_at_100
value: 53.366
- type: mrr_at_1000
value: 53.366
- type: mrr_at_3
value: 48.638999999999996
- type: mrr_at_5
value: 50.578
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 26.394000000000002
- type: ndcg_at_100
value: 36.41
- type: ndcg_at_1000
value: 49.206
- type: ndcg_at_3
value: 31.694
- type: ndcg_at_5
value: 29.529
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.286
- type: precision_at_1000
value: 1.5610000000000002
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 29.796
- type: recall_at_1
value: 2.519
- type: recall_at_10
value: 17.091
- type: recall_at_100
value: 45.429
- type: recall_at_1000
value: 84.621
- type: recall_at_3
value: 7.208
- type: recall_at_5
value: 10.523
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.58659999999999
- type: ap
value: 14.735696532619
- type: f1
value: 54.23517220069903
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 63.723825693265425
- type: f1
value: 64.02405729449103
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.310161547491006
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.77630088812064
- type: cos_sim_ap
value: 81.61725457333809
- type: cos_sim_f1
value: 74.91373801916932
- type: cos_sim_precision
value: 72.63940520446097
- type: cos_sim_recall
value: 77.33509234828496
- type: dot_accuracy
value: 88.77630088812064
- type: dot_ap
value: 81.61725317476251
- type: dot_f1
value: 74.91373801916932
- type: dot_precision
value: 72.63940520446097
- type: dot_recall
value: 77.33509234828496
- type: euclidean_accuracy
value: 88.77630088812064
- type: euclidean_ap
value: 81.61724596869566
- type: euclidean_f1
value: 74.91373801916932
- type: euclidean_precision
value: 72.63940520446097
- type: euclidean_recall
value: 77.33509234828496
- type: manhattan_accuracy
value: 88.67497168742922
- type: manhattan_ap
value: 81.430251048948
- type: manhattan_f1
value: 74.79593118171543
- type: manhattan_precision
value: 71.3635274382938
- type: manhattan_recall
value: 78.57519788918206
- type: max_accuracy
value: 88.77630088812064
- type: max_ap
value: 81.61725457333809
- type: max_f1
value: 74.91373801916932
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.85136026700819
- type: cos_sim_ap
value: 87.74656687446567
- type: cos_sim_f1
value: 80.3221673073403
- type: cos_sim_precision
value: 76.56871640957633
- type: cos_sim_recall
value: 84.46258084385587
- type: dot_accuracy
value: 89.85136026700819
- type: dot_ap
value: 87.74656471395072
- type: dot_f1
value: 80.3221673073403
- type: dot_precision
value: 76.56871640957633
- type: dot_recall
value: 84.46258084385587
- type: euclidean_accuracy
value: 89.85136026700819
- type: euclidean_ap
value: 87.74656885754466
- type: euclidean_f1
value: 80.3221673073403
- type: euclidean_precision
value: 76.56871640957633
- type: euclidean_recall
value: 84.46258084385587
- type: manhattan_accuracy
value: 89.86300306593705
- type: manhattan_ap
value: 87.78807479093082
- type: manhattan_f1
value: 80.31663429471911
- type: manhattan_precision
value: 76.63472970137772
- type: manhattan_recall
value: 84.3701878657222
- type: max_accuracy
value: 89.86300306593705
- type: max_ap
value: 87.78807479093082
- type: max_f1
value: 80.3221673073403
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 32.4
- type: map_at_10
value: 40.961999999999996
- type: map_at_100
value: 41.660000000000004
- type: map_at_1000
value: 41.721000000000004
- type: map_at_3
value: 38.550000000000004
- type: map_at_5
value: 40.06
- type: mrr_at_1
value: 32.4
- type: mrr_at_10
value: 40.961999999999996
- type: mrr_at_100
value: 41.660000000000004
- type: mrr_at_1000
value: 41.721000000000004
- type: mrr_at_3
value: 38.550000000000004
- type: mrr_at_5
value: 40.06
- type: ndcg_at_1
value: 32.4
- type: ndcg_at_10
value: 45.388
- type: ndcg_at_100
value: 49.012
- type: ndcg_at_1000
value: 50.659
- type: ndcg_at_3
value: 40.47
- type: ndcg_at_5
value: 43.232
- type: precision_at_1
value: 32.4
- type: precision_at_10
value: 5.94
- type: precision_at_100
value: 0.769
- type: precision_at_1000
value: 0.09
- type: precision_at_3
value: 15.333
- type: precision_at_5
value: 10.56
- type: recall_at_1
value: 32.4
- type: recall_at_10
value: 59.4
- type: recall_at_100
value: 76.9
- type: recall_at_1000
value: 90.0
- type: recall_at_3
value: 46.0
- type: recall_at_5
value: 52.800000000000004
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.94000000000001
- type: ap
value: 70.57373468481975
- type: f1
value: 85.26264784928323
---
## E5-mistral-7b-instruct
[Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
This model has 32 layers and the embedding size is 4096.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")
# In case you want to reduce the maximum sequence length:
model.max_seq_length = 4096
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
query_embeddings = model.encode(queries, prompt_name="web_search_query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
Have a look at [config_sentence_transformers.json](config_sentence_transformers.json) for the prompts that are pre-configured, such as `web_search_query`, `sts_query`, and `summarization_query`. Additionally, check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for prompts we used for evaluation. You can use these via e.g. `model.encode(queries, prompt="Instruct: Given a claim, find documents that refute the claim\nQuery: ")`.
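For instance, a minimal sketch reusing the `model` loaded above (the prompt names must match entries in `config_sentence_transformers.json`; the input sentences are purely illustrative):
```python
# Pre-configured prompt referenced by name (e.g. the "sts_query" prompt mentioned above)
sts_embeddings = model.encode(
    ["A man is playing a guitar.", "Someone is strumming a guitar."],
    prompt_name="sts_query",
)
# Custom one-sentence instruction passed as a raw prompt string
claim_embeddings = model.encode(
    ["The Great Wall of China is visible from space."],
    prompt="Instruct: Given a claim, find documents that refute the claim\nQuery: ",
)
print(sts_embeddings.shape, claim_embeddings.shape)
```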
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-mistral-7b-instruct')
model = AutoModel.from_pretrained('intfloat/e5-mistral-7b-instruct')
max_length = 4096
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
and fine-tuned on a mixture of multilingual datasets.
As a result, it has some multilingual capability.
However, since Mistral-7B-v0.1 is mainly trained on English data, we recommend using this model for English only.
For multilingual use cases, please refer to [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large).
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## FAQ
**1. Do I need to add instructions to the query?**
Yes. The model was trained with query-side instructions, so omitting them will degrade performance.
The task definition should be a one-sentence instruction that describes the task.
This is a way to customize text embeddings for different scenarios through natural language instructions.
Please check out [unilm/e5/utils.py](https://github.com/microsoft/unilm/blob/9c0f1ff7ca53431fe47d2637dfe253643d94185b/e5/utils.py#L106) for instructions we used for evaluation.
On the other hand, there is no need to add instructions to the document side.
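For illustration, a minimal sketch of this pattern (the task description below is a hypothetical example, not one of the instructions used for official evaluation):
```python
def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'

# Hypothetical task description for a code-search scenario
task = 'Given a programming question, retrieve code snippets that answer the question'
query_text = get_detailed_instruct(task, 'how to reverse a list in python')

# Documents are encoded as-is, without any instruction prefix
document_text = 'Use my_list.reverse() to reverse in place, or my_list[::-1] to get a reversed copy.'
print(query_text)
```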
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Where are the LoRA-only weights?**
You can find the LoRA-only weights at [https://huggingface.co/intfloat/e5-mistral-7b-instruct/tree/main/lora](https://huggingface.co/intfloat/e5-mistral-7b-instruct/tree/main/lora).
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```bibtex
@article{wang2023improving,
title={Improving Text Embeddings with Large Language Models},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2401.00368},
year={2023}
}
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
Using this model for inputs longer than 4096 tokens is not recommended.
This model's multilingual capability is still inferior to [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) in some cases.
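As a rough sketch of how one might check input length before encoding (this reuses the `tokenizer` loaded in the Transformers example above and simply reports when the recommended 4096-token limit would be exceeded):
```python
long_text = "some potentially very long document ..."
n_tokens = len(tokenizer(long_text, add_special_tokens=True)["input_ids"])
if n_tokens > 4096:
    print(f"Input has {n_tokens} tokens and will be truncated to 4096 during encoding.")
```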
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
BAAI/bge-large-en-v1.5 | BAAI | feature-extraction | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"en",
"arxiv:2401.03462",
"arxiv:2312.15503",
"arxiv:2311.13534",
"arxiv:2310.07554",
"arxiv:2309.07597",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,694 | 1,708 | 2,014,014 | 495 | ---
language:
- en
license: mit
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: bge-large-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8507462686567
- type: ap
value: 38.566457320228245
- type: f1
value: 69.69386648043475
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 92.416675
- type: ap
value: 89.1928861155922
- type: f1
value: 92.39477019574215
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.175999999999995
- type: f1
value: 47.80712792870253
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.184999999999995
- type: map_at_10
value: 55.654
- type: map_at_100
value: 56.25
- type: map_at_1000
value: 56.255
- type: map_at_3
value: 51.742999999999995
- type: map_at_5
value: 54.129000000000005
- type: mrr_at_1
value: 40.967
- type: mrr_at_10
value: 55.96
- type: mrr_at_100
value: 56.54900000000001
- type: mrr_at_1000
value: 56.554
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.44
- type: ndcg_at_1
value: 40.184999999999995
- type: ndcg_at_10
value: 63.542
- type: ndcg_at_100
value: 65.96499999999999
- type: ndcg_at_1000
value: 66.08699999999999
- type: ndcg_at_3
value: 55.582
- type: ndcg_at_5
value: 59.855000000000004
- type: precision_at_1
value: 40.184999999999995
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.987
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.405
- type: recall_at_1
value: 40.184999999999995
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 98.72
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 77.027
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.567077926750066
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 43.19453389182364
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.46555939623092
- type: mrr
value: 77.82361605768807
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.9554128814735
- type: cos_sim_spearman
value: 84.65373612172036
- type: euclidean_pearson
value: 83.2905059954138
- type: euclidean_spearman
value: 84.52240782811128
- type: manhattan_pearson
value: 82.99533802997436
- type: manhattan_spearman
value: 84.20673798475734
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.78896103896103
- type: f1
value: 87.77189310964883
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.714538337650495
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.90108349284447
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.795
- type: map_at_10
value: 43.669000000000004
- type: map_at_100
value: 45.151
- type: map_at_1000
value: 45.278
- type: map_at_3
value: 40.006
- type: map_at_5
value: 42.059999999999995
- type: mrr_at_1
value: 39.771
- type: mrr_at_10
value: 49.826
- type: mrr_at_100
value: 50.504000000000005
- type: mrr_at_1000
value: 50.549
- type: mrr_at_3
value: 47.115
- type: mrr_at_5
value: 48.832
- type: ndcg_at_1
value: 39.771
- type: ndcg_at_10
value: 50.217999999999996
- type: ndcg_at_100
value: 55.454
- type: ndcg_at_1000
value: 57.37
- type: ndcg_at_3
value: 44.885000000000005
- type: ndcg_at_5
value: 47.419
- type: precision_at_1
value: 39.771
- type: precision_at_10
value: 9.642000000000001
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 15.536
- type: recall_at_1
value: 32.795
- type: recall_at_10
value: 62.580999999999996
- type: recall_at_100
value: 84.438
- type: recall_at_1000
value: 96.492
- type: recall_at_3
value: 47.071000000000005
- type: recall_at_5
value: 54.079
- type: map_at_1
value: 32.671
- type: map_at_10
value: 43.334
- type: map_at_100
value: 44.566
- type: map_at_1000
value: 44.702999999999996
- type: map_at_3
value: 40.343
- type: map_at_5
value: 41.983
- type: mrr_at_1
value: 40.764
- type: mrr_at_10
value: 49.382
- type: mrr_at_100
value: 49.988
- type: mrr_at_1000
value: 50.03300000000001
- type: mrr_at_3
value: 47.293
- type: mrr_at_5
value: 48.51
- type: ndcg_at_1
value: 40.764
- type: ndcg_at_10
value: 49.039
- type: ndcg_at_100
value: 53.259
- type: ndcg_at_1000
value: 55.253
- type: ndcg_at_3
value: 45.091
- type: ndcg_at_5
value: 46.839999999999996
- type: precision_at_1
value: 40.764
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.476
- type: precision_at_1000
value: 0.19499999999999998
- type: precision_at_3
value: 21.72
- type: precision_at_5
value: 15.299
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 58.816
- type: recall_at_100
value: 76.654
- type: recall_at_1000
value: 89.05999999999999
- type: recall_at_3
value: 46.743
- type: recall_at_5
value: 51.783
- type: map_at_1
value: 40.328
- type: map_at_10
value: 53.32599999999999
- type: map_at_100
value: 54.37499999999999
- type: map_at_1000
value: 54.429
- type: map_at_3
value: 49.902
- type: map_at_5
value: 52.002
- type: mrr_at_1
value: 46.332
- type: mrr_at_10
value: 56.858
- type: mrr_at_100
value: 57.522
- type: mrr_at_1000
value: 57.54899999999999
- type: mrr_at_3
value: 54.472
- type: mrr_at_5
value: 55.996
- type: ndcg_at_1
value: 46.332
- type: ndcg_at_10
value: 59.313
- type: ndcg_at_100
value: 63.266999999999996
- type: ndcg_at_1000
value: 64.36
- type: ndcg_at_3
value: 53.815000000000005
- type: ndcg_at_5
value: 56.814
- type: precision_at_1
value: 46.332
- type: precision_at_10
value: 9.53
- type: precision_at_100
value: 1.238
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.054000000000002
- type: precision_at_5
value: 16.589000000000002
- type: recall_at_1
value: 40.328
- type: recall_at_10
value: 73.421
- type: recall_at_100
value: 90.059
- type: recall_at_1000
value: 97.81
- type: recall_at_3
value: 59.009
- type: recall_at_5
value: 66.352
- type: map_at_1
value: 27.424
- type: map_at_10
value: 36.332
- type: map_at_100
value: 37.347
- type: map_at_1000
value: 37.422
- type: map_at_3
value: 33.743
- type: map_at_5
value: 35.176
- type: mrr_at_1
value: 29.153000000000002
- type: mrr_at_10
value: 38.233
- type: mrr_at_100
value: 39.109
- type: mrr_at_1000
value: 39.164
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.169000000000004
- type: ndcg_at_1
value: 29.153000000000002
- type: ndcg_at_10
value: 41.439
- type: ndcg_at_100
value: 46.42
- type: ndcg_at_1000
value: 48.242000000000004
- type: ndcg_at_3
value: 36.362
- type: ndcg_at_5
value: 38.743
- type: precision_at_1
value: 29.153000000000002
- type: precision_at_10
value: 6.315999999999999
- type: precision_at_100
value: 0.927
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 15.443000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.424
- type: recall_at_10
value: 55.364000000000004
- type: recall_at_100
value: 78.211
- type: recall_at_1000
value: 91.74600000000001
- type: recall_at_3
value: 41.379
- type: recall_at_5
value: 47.14
- type: map_at_1
value: 19.601
- type: map_at_10
value: 27.826
- type: map_at_100
value: 29.017
- type: map_at_1000
value: 29.137
- type: map_at_3
value: 25.125999999999998
- type: map_at_5
value: 26.765
- type: mrr_at_1
value: 24.005000000000003
- type: mrr_at_10
value: 32.716
- type: mrr_at_100
value: 33.631
- type: mrr_at_1000
value: 33.694
- type: mrr_at_3
value: 29.934
- type: mrr_at_5
value: 31.630999999999997
- type: ndcg_at_1
value: 24.005000000000003
- type: ndcg_at_10
value: 33.158
- type: ndcg_at_100
value: 38.739000000000004
- type: ndcg_at_1000
value: 41.495
- type: ndcg_at_3
value: 28.185
- type: ndcg_at_5
value: 30.796
- type: precision_at_1
value: 24.005000000000003
- type: precision_at_10
value: 5.908
- type: precision_at_100
value: 1.005
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 13.391
- type: precision_at_5
value: 9.876
- type: recall_at_1
value: 19.601
- type: recall_at_10
value: 44.746
- type: recall_at_100
value: 68.82300000000001
- type: recall_at_1000
value: 88.215
- type: recall_at_3
value: 31.239
- type: recall_at_5
value: 37.695
- type: map_at_1
value: 30.130000000000003
- type: map_at_10
value: 40.96
- type: map_at_100
value: 42.282
- type: map_at_1000
value: 42.392
- type: map_at_3
value: 37.889
- type: map_at_5
value: 39.661
- type: mrr_at_1
value: 36.958999999999996
- type: mrr_at_10
value: 46.835
- type: mrr_at_100
value: 47.644
- type: mrr_at_1000
value: 47.688
- type: mrr_at_3
value: 44.562000000000005
- type: mrr_at_5
value: 45.938
- type: ndcg_at_1
value: 36.958999999999996
- type: ndcg_at_10
value: 47.06
- type: ndcg_at_100
value: 52.345
- type: ndcg_at_1000
value: 54.35
- type: ndcg_at_3
value: 42.301
- type: ndcg_at_5
value: 44.635999999999996
- type: precision_at_1
value: 36.958999999999996
- type: precision_at_10
value: 8.479000000000001
- type: precision_at_100
value: 1.284
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 20.244
- type: precision_at_5
value: 14.224999999999998
- type: recall_at_1
value: 30.130000000000003
- type: recall_at_10
value: 59.27
- type: recall_at_100
value: 81.195
- type: recall_at_1000
value: 94.21199999999999
- type: recall_at_3
value: 45.885
- type: recall_at_5
value: 52.016
- type: map_at_1
value: 26.169999999999998
- type: map_at_10
value: 36.451
- type: map_at_100
value: 37.791000000000004
- type: map_at_1000
value: 37.897
- type: map_at_3
value: 33.109
- type: map_at_5
value: 34.937000000000005
- type: mrr_at_1
value: 32.877
- type: mrr_at_10
value: 42.368
- type: mrr_at_100
value: 43.201
- type: mrr_at_1000
value: 43.259
- type: mrr_at_3
value: 39.763999999999996
- type: mrr_at_5
value: 41.260000000000005
- type: ndcg_at_1
value: 32.877
- type: ndcg_at_10
value: 42.659000000000006
- type: ndcg_at_100
value: 48.161
- type: ndcg_at_1000
value: 50.345
- type: ndcg_at_3
value: 37.302
- type: ndcg_at_5
value: 39.722
- type: precision_at_1
value: 32.877
- type: precision_at_10
value: 7.9
- type: precision_at_100
value: 1.236
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.846
- type: precision_at_5
value: 12.9
- type: recall_at_1
value: 26.169999999999998
- type: recall_at_10
value: 55.35
- type: recall_at_100
value: 78.755
- type: recall_at_1000
value: 93.518
- type: recall_at_3
value: 40.176
- type: recall_at_5
value: 46.589000000000006
- type: map_at_1
value: 27.15516666666667
- type: map_at_10
value: 36.65741666666667
- type: map_at_100
value: 37.84991666666666
- type: map_at_1000
value: 37.96316666666667
- type: map_at_3
value: 33.74974999999999
- type: map_at_5
value: 35.3765
- type: mrr_at_1
value: 32.08233333333334
- type: mrr_at_10
value: 41.033833333333334
- type: mrr_at_100
value: 41.84524999999999
- type: mrr_at_1000
value: 41.89983333333333
- type: mrr_at_3
value: 38.62008333333333
- type: mrr_at_5
value: 40.03441666666666
- type: ndcg_at_1
value: 32.08233333333334
- type: ndcg_at_10
value: 42.229
- type: ndcg_at_100
value: 47.26716666666667
- type: ndcg_at_1000
value: 49.43466666666667
- type: ndcg_at_3
value: 37.36408333333333
- type: ndcg_at_5
value: 39.6715
- type: precision_at_1
value: 32.08233333333334
- type: precision_at_10
value: 7.382583333333334
- type: precision_at_100
value: 1.16625
- type: precision_at_1000
value: 0.15408333333333332
- type: precision_at_3
value: 17.218
- type: precision_at_5
value: 12.21875
- type: recall_at_1
value: 27.15516666666667
- type: recall_at_10
value: 54.36683333333333
- type: recall_at_100
value: 76.37183333333333
- type: recall_at_1000
value: 91.26183333333333
- type: recall_at_3
value: 40.769916666666674
- type: recall_at_5
value: 46.702333333333335
- type: map_at_1
value: 25.749
- type: map_at_10
value: 33.001999999999995
- type: map_at_100
value: 33.891
- type: map_at_1000
value: 33.993
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 31.959
- type: mrr_at_1
value: 28.834
- type: mrr_at_10
value: 35.955
- type: mrr_at_100
value: 36.709
- type: mrr_at_1000
value: 36.779
- type: mrr_at_3
value: 33.947
- type: mrr_at_5
value: 35.089
- type: ndcg_at_1
value: 28.834
- type: ndcg_at_10
value: 37.329
- type: ndcg_at_100
value: 41.79
- type: ndcg_at_1000
value: 44.169000000000004
- type: ndcg_at_3
value: 33.184999999999995
- type: ndcg_at_5
value: 35.107
- type: precision_at_1
value: 28.834
- type: precision_at_10
value: 5.7669999999999995
- type: precision_at_100
value: 0.876
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 14.213000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 25.749
- type: recall_at_10
value: 47.791
- type: recall_at_100
value: 68.255
- type: recall_at_1000
value: 85.749
- type: recall_at_3
value: 36.199
- type: recall_at_5
value: 41.071999999999996
- type: map_at_1
value: 17.777
- type: map_at_10
value: 25.201
- type: map_at_100
value: 26.423999999999996
- type: map_at_1000
value: 26.544
- type: map_at_3
value: 22.869
- type: map_at_5
value: 24.023
- type: mrr_at_1
value: 21.473
- type: mrr_at_10
value: 29.12
- type: mrr_at_100
value: 30.144
- type: mrr_at_1000
value: 30.215999999999998
- type: mrr_at_3
value: 26.933
- type: mrr_at_5
value: 28.051
- type: ndcg_at_1
value: 21.473
- type: ndcg_at_10
value: 30.003
- type: ndcg_at_100
value: 35.766
- type: ndcg_at_1000
value: 38.501000000000005
- type: ndcg_at_3
value: 25.773000000000003
- type: ndcg_at_5
value: 27.462999999999997
- type: precision_at_1
value: 21.473
- type: precision_at_10
value: 5.482
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.205
- type: precision_at_5
value: 8.692
- type: recall_at_1
value: 17.777
- type: recall_at_10
value: 40.582
- type: recall_at_100
value: 66.305
- type: recall_at_1000
value: 85.636
- type: recall_at_3
value: 28.687
- type: recall_at_5
value: 33.089
- type: map_at_1
value: 26.677
- type: map_at_10
value: 36.309000000000005
- type: map_at_100
value: 37.403999999999996
- type: map_at_1000
value: 37.496
- type: map_at_3
value: 33.382
- type: map_at_5
value: 34.98
- type: mrr_at_1
value: 31.343
- type: mrr_at_10
value: 40.549
- type: mrr_at_100
value: 41.342
- type: mrr_at_1000
value: 41.397
- type: mrr_at_3
value: 38.029
- type: mrr_at_5
value: 39.451
- type: ndcg_at_1
value: 31.343
- type: ndcg_at_10
value: 42.1
- type: ndcg_at_100
value: 47.089999999999996
- type: ndcg_at_1000
value: 49.222
- type: ndcg_at_3
value: 36.836999999999996
- type: ndcg_at_5
value: 39.21
- type: precision_at_1
value: 31.343
- type: precision_at_10
value: 7.164
- type: precision_at_100
value: 1.0959999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.915
- type: precision_at_5
value: 11.940000000000001
- type: recall_at_1
value: 26.677
- type: recall_at_10
value: 55.54599999999999
- type: recall_at_100
value: 77.094
- type: recall_at_1000
value: 92.01
- type: recall_at_3
value: 41.191
- type: recall_at_5
value: 47.006
- type: map_at_1
value: 24.501
- type: map_at_10
value: 33.102
- type: map_at_100
value: 34.676
- type: map_at_1000
value: 34.888000000000005
- type: map_at_3
value: 29.944
- type: map_at_5
value: 31.613999999999997
- type: mrr_at_1
value: 29.447000000000003
- type: mrr_at_10
value: 37.996
- type: mrr_at_100
value: 38.946
- type: mrr_at_1000
value: 38.995000000000005
- type: mrr_at_3
value: 35.079
- type: mrr_at_5
value: 36.69
- type: ndcg_at_1
value: 29.447000000000003
- type: ndcg_at_10
value: 39.232
- type: ndcg_at_100
value: 45.247
- type: ndcg_at_1000
value: 47.613
- type: ndcg_at_3
value: 33.922999999999995
- type: ndcg_at_5
value: 36.284
- type: precision_at_1
value: 29.447000000000003
- type: precision_at_10
value: 7.648000000000001
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 16.008
- type: precision_at_5
value: 11.779
- type: recall_at_1
value: 24.501
- type: recall_at_10
value: 51.18899999999999
- type: recall_at_100
value: 78.437
- type: recall_at_1000
value: 92.842
- type: recall_at_3
value: 35.808
- type: recall_at_5
value: 42.197
- type: map_at_1
value: 22.039
- type: map_at_10
value: 30.377
- type: map_at_100
value: 31.275
- type: map_at_1000
value: 31.379
- type: map_at_3
value: 27.98
- type: map_at_5
value: 29.358
- type: mrr_at_1
value: 24.03
- type: mrr_at_10
value: 32.568000000000005
- type: mrr_at_100
value: 33.403
- type: mrr_at_1000
value: 33.475
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 31.796000000000003
- type: ndcg_at_1
value: 24.03
- type: ndcg_at_10
value: 35.198
- type: ndcg_at_100
value: 39.668
- type: ndcg_at_1000
value: 42.296
- type: ndcg_at_3
value: 30.709999999999997
- type: ndcg_at_5
value: 33.024
- type: precision_at_1
value: 24.03
- type: precision_at_10
value: 5.564
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 13.309000000000001
- type: precision_at_5
value: 9.39
- type: recall_at_1
value: 22.039
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 68.23599999999999
- type: recall_at_1000
value: 87.852
- type: recall_at_3
value: 35.852000000000004
- type: recall_at_5
value: 41.410000000000004
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.692999999999998
- type: map_at_10
value: 26.903
- type: map_at_100
value: 28.987000000000002
- type: map_at_1000
value: 29.176999999999996
- type: map_at_3
value: 22.137
- type: map_at_5
value: 24.758
- type: mrr_at_1
value: 35.57
- type: mrr_at_10
value: 47.821999999999996
- type: mrr_at_100
value: 48.608000000000004
- type: mrr_at_1000
value: 48.638999999999996
- type: mrr_at_3
value: 44.452000000000005
- type: mrr_at_5
value: 46.546
- type: ndcg_at_1
value: 35.57
- type: ndcg_at_10
value: 36.567
- type: ndcg_at_100
value: 44.085
- type: ndcg_at_1000
value: 47.24
- type: ndcg_at_3
value: 29.964000000000002
- type: ndcg_at_5
value: 32.511
- type: precision_at_1
value: 35.57
- type: precision_at_10
value: 11.485
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.256
- type: precision_at_3
value: 22.237000000000002
- type: precision_at_5
value: 17.471999999999998
- type: recall_at_1
value: 15.692999999999998
- type: recall_at_10
value: 43.056
- type: recall_at_100
value: 68.628
- type: recall_at_1000
value: 86.075
- type: recall_at_3
value: 26.918999999999997
- type: recall_at_5
value: 34.14
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.53
- type: map_at_10
value: 20.951
- type: map_at_100
value: 30.136000000000003
- type: map_at_1000
value: 31.801000000000002
- type: map_at_3
value: 15.021
- type: map_at_5
value: 17.471999999999998
- type: mrr_at_1
value: 71.0
- type: mrr_at_10
value: 79.176
- type: mrr_at_100
value: 79.418
- type: mrr_at_1000
value: 79.426
- type: mrr_at_3
value: 78.125
- type: mrr_at_5
value: 78.61200000000001
- type: ndcg_at_1
value: 58.5
- type: ndcg_at_10
value: 44.106
- type: ndcg_at_100
value: 49.268
- type: ndcg_at_1000
value: 56.711999999999996
- type: ndcg_at_3
value: 48.934
- type: ndcg_at_5
value: 45.826
- type: precision_at_1
value: 71.0
- type: precision_at_10
value: 35.0
- type: precision_at_100
value: 11.360000000000001
- type: precision_at_1000
value: 2.046
- type: precision_at_3
value: 52.833
- type: precision_at_5
value: 44.15
- type: recall_at_1
value: 9.53
- type: recall_at_10
value: 26.811
- type: recall_at_100
value: 55.916999999999994
- type: recall_at_1000
value: 79.973
- type: recall_at_3
value: 16.413
- type: recall_at_5
value: 19.980999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.519999999999996
- type: f1
value: 46.36601294761231
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.413
- type: map_at_10
value: 83.414
- type: map_at_100
value: 83.621
- type: map_at_1000
value: 83.635
- type: map_at_3
value: 82.337
- type: map_at_5
value: 83.039
- type: mrr_at_1
value: 80.19800000000001
- type: mrr_at_10
value: 87.715
- type: mrr_at_100
value: 87.778
- type: mrr_at_1000
value: 87.779
- type: mrr_at_3
value: 87.106
- type: mrr_at_5
value: 87.555
- type: ndcg_at_1
value: 80.19800000000001
- type: ndcg_at_10
value: 87.182
- type: ndcg_at_100
value: 87.90299999999999
- type: ndcg_at_1000
value: 88.143
- type: ndcg_at_3
value: 85.60600000000001
- type: ndcg_at_5
value: 86.541
- type: precision_at_1
value: 80.19800000000001
- type: precision_at_10
value: 10.531
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.933
- type: precision_at_5
value: 20.429
- type: recall_at_1
value: 74.413
- type: recall_at_10
value: 94.363
- type: recall_at_100
value: 97.165
- type: recall_at_1000
value: 98.668
- type: recall_at_3
value: 90.108
- type: recall_at_5
value: 92.52
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.701
- type: map_at_10
value: 37.122
- type: map_at_100
value: 39.178000000000004
- type: map_at_1000
value: 39.326
- type: map_at_3
value: 32.971000000000004
- type: map_at_5
value: 35.332
- type: mrr_at_1
value: 44.753
- type: mrr_at_10
value: 53.452
- type: mrr_at_100
value: 54.198
- type: mrr_at_1000
value: 54.225
- type: mrr_at_3
value: 50.952
- type: mrr_at_5
value: 52.464
- type: ndcg_at_1
value: 44.753
- type: ndcg_at_10
value: 45.021
- type: ndcg_at_100
value: 52.028
- type: ndcg_at_1000
value: 54.596000000000004
- type: ndcg_at_3
value: 41.622
- type: ndcg_at_5
value: 42.736000000000004
- type: precision_at_1
value: 44.753
- type: precision_at_10
value: 12.284
- type: precision_at_100
value: 1.955
- type: precision_at_1000
value: 0.243
- type: precision_at_3
value: 27.828999999999997
- type: precision_at_5
value: 20.061999999999998
- type: recall_at_1
value: 22.701
- type: recall_at_10
value: 51.432
- type: recall_at_100
value: 77.009
- type: recall_at_1000
value: 92.511
- type: recall_at_3
value: 37.919000000000004
- type: recall_at_5
value: 44.131
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.189
- type: map_at_10
value: 66.24600000000001
- type: map_at_100
value: 67.098
- type: map_at_1000
value: 67.149
- type: map_at_3
value: 62.684
- type: map_at_5
value: 64.974
- type: mrr_at_1
value: 80.378
- type: mrr_at_10
value: 86.127
- type: mrr_at_100
value: 86.29299999999999
- type: mrr_at_1000
value: 86.297
- type: mrr_at_3
value: 85.31400000000001
- type: mrr_at_5
value: 85.858
- type: ndcg_at_1
value: 80.378
- type: ndcg_at_10
value: 74.101
- type: ndcg_at_100
value: 76.993
- type: ndcg_at_1000
value: 77.948
- type: ndcg_at_3
value: 69.232
- type: ndcg_at_5
value: 72.04599999999999
- type: precision_at_1
value: 80.378
- type: precision_at_10
value: 15.595999999999998
- type: precision_at_100
value: 1.7840000000000003
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 44.884
- type: precision_at_5
value: 29.145
- type: recall_at_1
value: 40.189
- type: recall_at_10
value: 77.981
- type: recall_at_100
value: 89.21
- type: recall_at_1000
value: 95.48299999999999
- type: recall_at_3
value: 67.326
- type: recall_at_5
value: 72.863
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.84599999999999
- type: ap
value: 89.4710787567357
- type: f1
value: 92.83752676932258
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.132
- type: map_at_10
value: 35.543
- type: map_at_100
value: 36.702
- type: map_at_1000
value: 36.748999999999995
- type: map_at_3
value: 31.737
- type: map_at_5
value: 33.927
- type: mrr_at_1
value: 23.782
- type: mrr_at_10
value: 36.204
- type: mrr_at_100
value: 37.29
- type: mrr_at_1000
value: 37.330999999999996
- type: mrr_at_3
value: 32.458999999999996
- type: mrr_at_5
value: 34.631
- type: ndcg_at_1
value: 23.782
- type: ndcg_at_10
value: 42.492999999999995
- type: ndcg_at_100
value: 47.985
- type: ndcg_at_1000
value: 49.141
- type: ndcg_at_3
value: 34.748000000000005
- type: ndcg_at_5
value: 38.651
- type: precision_at_1
value: 23.782
- type: precision_at_10
value: 6.665
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.776
- type: precision_at_5
value: 10.84
- type: recall_at_1
value: 23.132
- type: recall_at_10
value: 63.794
- type: recall_at_100
value: 89.027
- type: recall_at_1000
value: 97.807
- type: recall_at_3
value: 42.765
- type: recall_at_5
value: 52.11
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.59188326493388
- type: f1
value: 94.3842594786827
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.49384404924761
- type: f1
value: 59.7580539534629
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.56220578345663
- type: f1
value: 75.27228165561478
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.53463349024884
- type: f1
value: 80.4893958236536
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.56100273484962
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.470380028839607
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.06102792457849
- type: mrr
value: 33.30709199672238
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.776999999999999
- type: map_at_10
value: 14.924000000000001
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.538999999999998
- type: map_at_3
value: 10.982
- type: map_at_5
value: 12.679000000000002
- type: mrr_at_1
value: 47.988
- type: mrr_at_10
value: 57.232000000000006
- type: mrr_at_100
value: 57.818999999999996
- type: mrr_at_1000
value: 57.847
- type: mrr_at_3
value: 54.901999999999994
- type: mrr_at_5
value: 56.481
- type: ndcg_at_1
value: 46.594
- type: ndcg_at_10
value: 38.129000000000005
- type: ndcg_at_100
value: 35.54
- type: ndcg_at_1000
value: 44.172
- type: ndcg_at_3
value: 43.025999999999996
- type: ndcg_at_5
value: 41.052
- type: precision_at_1
value: 47.988
- type: precision_at_10
value: 28.111000000000004
- type: precision_at_100
value: 8.929
- type: precision_at_1000
value: 2.185
- type: precision_at_3
value: 40.144000000000005
- type: precision_at_5
value: 35.232
- type: recall_at_1
value: 6.776999999999999
- type: recall_at_10
value: 19.289
- type: recall_at_100
value: 36.359
- type: recall_at_1000
value: 67.54
- type: recall_at_3
value: 11.869
- type: recall_at_5
value: 14.999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.108000000000004
- type: map_at_10
value: 47.126000000000005
- type: map_at_100
value: 48.171
- type: map_at_1000
value: 48.199
- type: map_at_3
value: 42.734
- type: map_at_5
value: 45.362
- type: mrr_at_1
value: 34.936
- type: mrr_at_10
value: 49.571
- type: mrr_at_100
value: 50.345
- type: mrr_at_1000
value: 50.363
- type: mrr_at_3
value: 45.959
- type: mrr_at_5
value: 48.165
- type: ndcg_at_1
value: 34.936
- type: ndcg_at_10
value: 55.028999999999996
- type: ndcg_at_100
value: 59.244
- type: ndcg_at_1000
value: 59.861
- type: ndcg_at_3
value: 46.872
- type: ndcg_at_5
value: 51.217999999999996
- type: precision_at_1
value: 34.936
- type: precision_at_10
value: 9.099
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.456
- type: precision_at_5
value: 15.411
- type: recall_at_1
value: 31.108000000000004
- type: recall_at_10
value: 76.53999999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.947
- type: recall_at_3
value: 55.572
- type: recall_at_5
value: 65.525
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.56400000000001
- type: map_at_10
value: 85.482
- type: map_at_100
value: 86.114
- type: map_at_1000
value: 86.13
- type: map_at_3
value: 82.607
- type: map_at_5
value: 84.405
- type: mrr_at_1
value: 82.42
- type: mrr_at_10
value: 88.304
- type: mrr_at_100
value: 88.399
- type: mrr_at_1000
value: 88.399
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.024
- type: ndcg_at_1
value: 82.45
- type: ndcg_at_10
value: 89.06500000000001
- type: ndcg_at_100
value: 90.232
- type: ndcg_at_1000
value: 90.305
- type: ndcg_at_3
value: 86.375
- type: ndcg_at_5
value: 87.85300000000001
- type: precision_at_1
value: 82.45
- type: precision_at_10
value: 13.486999999999998
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.813
- type: precision_at_5
value: 24.773999999999997
- type: recall_at_1
value: 71.56400000000001
- type: recall_at_10
value: 95.812
- type: recall_at_100
value: 99.7
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 87.966
- type: recall_at_5
value: 92.268
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 57.241876648614145
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.66212576446223
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.308
- type: map_at_10
value: 13.803
- type: map_at_100
value: 16.176
- type: map_at_1000
value: 16.561
- type: map_at_3
value: 9.761000000000001
- type: map_at_5
value: 11.802
- type: mrr_at_1
value: 26.200000000000003
- type: mrr_at_10
value: 37.621
- type: mrr_at_100
value: 38.767
- type: mrr_at_1000
value: 38.815
- type: mrr_at_3
value: 34.117
- type: mrr_at_5
value: 36.107
- type: ndcg_at_1
value: 26.200000000000003
- type: ndcg_at_10
value: 22.64
- type: ndcg_at_100
value: 31.567
- type: ndcg_at_1000
value: 37.623
- type: ndcg_at_3
value: 21.435000000000002
- type: ndcg_at_5
value: 18.87
- type: precision_at_1
value: 26.200000000000003
- type: precision_at_10
value: 11.74
- type: precision_at_100
value: 2.465
- type: precision_at_1000
value: 0.391
- type: precision_at_3
value: 20.033
- type: precision_at_5
value: 16.64
- type: recall_at_1
value: 5.308
- type: recall_at_10
value: 23.794999999999998
- type: recall_at_100
value: 50.015
- type: recall_at_1000
value: 79.283
- type: recall_at_3
value: 12.178
- type: recall_at_5
value: 16.882
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.93231134675553
- type: cos_sim_spearman
value: 81.68319292603205
- type: euclidean_pearson
value: 81.8396814380367
- type: euclidean_spearman
value: 81.24641903349945
- type: manhattan_pearson
value: 81.84698799204274
- type: manhattan_spearman
value: 81.24269997904105
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.73241671587446
- type: cos_sim_spearman
value: 79.05091082971826
- type: euclidean_pearson
value: 83.91146869578044
- type: euclidean_spearman
value: 79.87978465370936
- type: manhattan_pearson
value: 83.90888338917678
- type: manhattan_spearman
value: 79.87482848584241
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 85.14970731146177
- type: cos_sim_spearman
value: 86.37363490084627
- type: euclidean_pearson
value: 83.02154218530433
- type: euclidean_spearman
value: 83.80258761957367
- type: manhattan_pearson
value: 83.01664495119347
- type: manhattan_spearman
value: 83.77567458007952
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.40474139886784
- type: cos_sim_spearman
value: 82.77768789165984
- type: euclidean_pearson
value: 80.7065877443695
- type: euclidean_spearman
value: 81.375940662505
- type: manhattan_pearson
value: 80.6507552270278
- type: manhattan_spearman
value: 81.32782179098741
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.08585968722274
- type: cos_sim_spearman
value: 88.03110031451399
- type: euclidean_pearson
value: 85.74012019602384
- type: euclidean_spearman
value: 86.13592849438209
- type: manhattan_pearson
value: 85.74404842369206
- type: manhattan_spearman
value: 86.14492318960154
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.95069052788875
- type: cos_sim_spearman
value: 86.4867991595147
- type: euclidean_pearson
value: 84.31013325754635
- type: euclidean_spearman
value: 85.01529258006482
- type: manhattan_pearson
value: 84.26995570085374
- type: manhattan_spearman
value: 84.96982104986162
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.54617647971897
- type: cos_sim_spearman
value: 87.49834181751034
- type: euclidean_pearson
value: 86.01015322577122
- type: euclidean_spearman
value: 84.63362652063199
- type: manhattan_pearson
value: 86.13807574475706
- type: manhattan_spearman
value: 84.7772370721132
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.20047755786615
- type: cos_sim_spearman
value: 67.05324077987636
- type: euclidean_pearson
value: 66.91930642976601
- type: euclidean_spearman
value: 65.21491856099105
- type: manhattan_pearson
value: 66.78756851976624
- type: manhattan_spearman
value: 65.12356257740728
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.19852871539686
- type: cos_sim_spearman
value: 87.5161895296395
- type: euclidean_pearson
value: 84.59848645207485
- type: euclidean_spearman
value: 85.26427328757919
- type: manhattan_pearson
value: 84.59747366996524
- type: manhattan_spearman
value: 85.24045855146915
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.63320317811032
- type: mrr
value: 96.26242947321379
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.928000000000004
- type: map_at_10
value: 70.112
- type: map_at_100
value: 70.59299999999999
- type: map_at_1000
value: 70.623
- type: map_at_3
value: 66.846
- type: map_at_5
value: 68.447
- type: mrr_at_1
value: 64.0
- type: mrr_at_10
value: 71.212
- type: mrr_at_100
value: 71.616
- type: mrr_at_1000
value: 71.64500000000001
- type: mrr_at_3
value: 68.77799999999999
- type: mrr_at_5
value: 70.094
- type: ndcg_at_1
value: 64.0
- type: ndcg_at_10
value: 74.607
- type: ndcg_at_100
value: 76.416
- type: ndcg_at_1000
value: 77.102
- type: ndcg_at_3
value: 69.126
- type: ndcg_at_5
value: 71.41300000000001
- type: precision_at_1
value: 64.0
- type: precision_at_10
value: 9.933
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.556
- type: precision_at_5
value: 17.467
- type: recall_at_1
value: 60.928000000000004
- type: recall_at_10
value: 87.322
- type: recall_at_100
value: 94.833
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 72.628
- type: recall_at_5
value: 78.428
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.86237623762376
- type: cos_sim_ap
value: 96.72586477206649
- type: cos_sim_f1
value: 93.01858362631845
- type: cos_sim_precision
value: 93.4409687184662
- type: cos_sim_recall
value: 92.60000000000001
- type: dot_accuracy
value: 99.78019801980199
- type: dot_ap
value: 93.72748205246228
- type: dot_f1
value: 89.04109589041096
- type: dot_precision
value: 87.16475095785441
- type: dot_recall
value: 91.0
- type: euclidean_accuracy
value: 99.85445544554456
- type: euclidean_ap
value: 96.6661459876145
- type: euclidean_f1
value: 92.58337481333997
- type: euclidean_precision
value: 92.17046580773042
- type: euclidean_recall
value: 93.0
- type: manhattan_accuracy
value: 99.85445544554456
- type: manhattan_ap
value: 96.6883549244056
- type: manhattan_f1
value: 92.57598405580468
- type: manhattan_precision
value: 92.25422045680239
- type: manhattan_recall
value: 92.9
- type: max_accuracy
value: 99.86237623762376
- type: max_ap
value: 96.72586477206649
- type: max_f1
value: 93.01858362631845
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.39930057069995
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96398659903402
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.946944700355395
- type: mrr
value: 56.97151398438164
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.541657650692905
- type: cos_sim_spearman
value: 31.605804192286303
- type: dot_pearson
value: 28.26905996736398
- type: dot_spearman
value: 27.864801765851187
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22599999999999998
- type: map_at_10
value: 1.8870000000000002
- type: map_at_100
value: 9.78
- type: map_at_1000
value: 22.514
- type: map_at_3
value: 0.6669999999999999
- type: map_at_5
value: 1.077
- type: mrr_at_1
value: 82.0
- type: mrr_at_10
value: 89.86699999999999
- type: mrr_at_100
value: 89.86699999999999
- type: mrr_at_1000
value: 89.86699999999999
- type: mrr_at_3
value: 89.667
- type: mrr_at_5
value: 89.667
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 74.818
- type: ndcg_at_100
value: 53.715999999999994
- type: ndcg_at_1000
value: 47.082
- type: ndcg_at_3
value: 82.134
- type: ndcg_at_5
value: 79.81899999999999
- type: precision_at_1
value: 82.0
- type: precision_at_10
value: 78.0
- type: precision_at_100
value: 54.48
- type: precision_at_1000
value: 20.518
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.22599999999999998
- type: recall_at_10
value: 2.072
- type: recall_at_100
value: 13.013
- type: recall_at_1000
value: 43.462
- type: recall_at_3
value: 0.695
- type: recall_at_5
value: 1.139
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.328
- type: map_at_10
value: 9.795
- type: map_at_100
value: 15.801000000000002
- type: map_at_1000
value: 17.23
- type: map_at_3
value: 4.734
- type: map_at_5
value: 6.644
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 46.902
- type: mrr_at_100
value: 47.495
- type: mrr_at_1000
value: 47.495
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 44.218
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 24.806
- type: ndcg_at_100
value: 36.419000000000004
- type: ndcg_at_1000
value: 47.272999999999996
- type: ndcg_at_3
value: 25.666
- type: ndcg_at_5
value: 25.448999999999998
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 23.061
- type: precision_at_100
value: 7.714
- type: precision_at_1000
value: 1.484
- type: precision_at_3
value: 26.531
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.328
- type: recall_at_10
value: 16.524
- type: recall_at_100
value: 47.179
- type: recall_at_1000
value: 81.22200000000001
- type: recall_at_3
value: 5.745
- type: recall_at_5
value: 9.339
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 70.9142
- type: ap
value: 14.335574772555415
- type: f1
value: 54.62839595194111
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.94340690435768
- type: f1
value: 60.286487936731916
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.26597708987974
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.48882398521786
- type: cos_sim_ap
value: 79.04326607602204
- type: cos_sim_f1
value: 71.64566826860633
- type: cos_sim_precision
value: 70.55512918905092
- type: cos_sim_recall
value: 72.77044854881267
- type: dot_accuracy
value: 84.19264469213805
- type: dot_ap
value: 67.96360043562528
- type: dot_f1
value: 64.06418393006827
- type: dot_precision
value: 58.64941898706424
- type: dot_recall
value: 70.58047493403694
- type: euclidean_accuracy
value: 87.45902127913214
- type: euclidean_ap
value: 78.9742237648272
- type: euclidean_f1
value: 71.5553235908142
- type: euclidean_precision
value: 70.77955601445535
- type: euclidean_recall
value: 72.34828496042216
- type: manhattan_accuracy
value: 87.41729749061214
- type: manhattan_ap
value: 78.90073137580596
- type: manhattan_f1
value: 71.3942611553533
- type: manhattan_precision
value: 68.52705653967483
- type: manhattan_recall
value: 74.51187335092348
- type: max_accuracy
value: 87.48882398521786
- type: max_ap
value: 79.04326607602204
- type: max_f1
value: 71.64566826860633
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.68125897465751
- type: cos_sim_ap
value: 85.6003454431979
- type: cos_sim_f1
value: 77.6957163958641
- type: cos_sim_precision
value: 73.0110366307807
- type: cos_sim_recall
value: 83.02279026793964
- type: dot_accuracy
value: 87.7672992587418
- type: dot_ap
value: 82.4971301112899
- type: dot_f1
value: 75.90528233151184
- type: dot_precision
value: 72.0370626469368
- type: dot_recall
value: 80.21250384970742
- type: euclidean_accuracy
value: 88.4503434625684
- type: euclidean_ap
value: 84.91949884748384
- type: euclidean_f1
value: 76.92365018444684
- type: euclidean_precision
value: 74.53245721712759
- type: euclidean_recall
value: 79.47336002463813
- type: manhattan_accuracy
value: 88.47556952691427
- type: manhattan_ap
value: 84.8963689101517
- type: manhattan_f1
value: 76.85901249256395
- type: manhattan_precision
value: 74.31693989071039
- type: manhattan_recall
value: 79.58115183246073
- type: max_accuracy
value: 88.68125897465751
- type: max_ap
value: 85.6003454431979
- type: max_f1
value: 77.6957163958641
---
<h1 align="center">FlagEmbedding</h1>
<h4 align="center">
<p>
<a href=#model-list>Model List</a> |
<a href=#frequently-asked-questions>FAQ</a> |
<a href=#usage>Usage</a> |
<a href="#evaluation">Evaluation</a> |
<a href="#train">Train</a> |
<a href="#contact">Contact</a> |
<a href="#citation">Citation</a> |
<a href="#license">License</a>
<p>
</h4>
For more details, please refer to our GitHub repository: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3).
[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
## News
- 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release **LLaRA**, a LLaMA-7B-based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. The model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
- 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) and [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE have been released.
- 09/12/2023: New models:
- **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
- **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance their retrieval ability without instruction.
<details>
<summary>More</summary>
<!-- ### More -->
- 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
</details>
## Model List
`bge` is short for `BAAI general embedding`.
| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |
[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
For example, use a bge embedding model to retrieve the top 100 relevant documents, and then use a bge reranker to re-rank those 100 documents to get the final top-3 results; a rough sketch of this pipeline is shown below.
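The retrieve-then-rerank pipeline can be sketched roughly as follows. This is a minimal illustration, not an official recipe: `corpus` is a tiny hypothetical passage list, and the top-100/top-3 cutoffs are just the numbers from the example above.
```python
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

query = "what is panda?"
corpus = ["The giant panda is a bear species endemic to China.",
          "Paris is the capital of France.",
          "Pandas mainly eat bamboo."]  # hypothetical passages

# Stage 1: dense retrieval with a bge embedding model
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-scores)[:100]       # keep up to the top-100 candidates

# Stage 2: re-rank the candidates with a bge cross-encoder
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidates])
order = np.argsort(rerank_scores)[::-1][:3]  # final top-3 after re-ranking
print([corpus[candidates[i]] for i in order])
```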
All models have been uploaded to Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot open the Huggingface Hub, you also can download the models at https://model.baai.ac.cn/models .
## Frequently asked questions
<details>
<summary>1. How to fine-tune bge embedding model?</summary>
<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your own data, the pre-trained model cannot be used to calculate similarity directly; it must be fine-tuned with contrastive learning first.
- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker.
</details>
<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>
<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**
Since we fine-tune the models with contrastive learning at a temperature of 0.01,
the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not by itself indicate that the two sentences are similar.
For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate threshold based on the similarity distribution on your own data (such as 0.8, 0.85, or even 0.9); a small sketch of such filtering follows this FAQ.
</details>
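As a rough illustration of the threshold-based filtering mentioned in FAQ 2, the sketch below scores two sentence pairs with `bge-large-en-v1.5`; the 0.85 threshold is only an example and should be chosen from the score distribution on your own data.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-large-en-v1.5')
pairs = [("the cat sits on the mat", "a cat is sitting on a mat"),
         ("the cat sits on the mat", "quarterly revenue grew by 12%")]

threshold = 0.85  # example value; tune it on your own data
for a, b in pairs:
    emb = model.encode([a, b], normalize_embeddings=True)
    score = float(emb[0] @ emb[1])  # cosine similarity of normalized embeddings
    print(f"{score:.3f}", "similar" if score >= threshold else "dissimilar", "|", a, "<->", b)
```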
<details>
<summary>3. When does the query instruction need to be used</summary>
<!-- ### When does the query instruction need to be used -->
For `bge-*-v1.5`, we improved their retrieval ability when no instruction is used.
Omitting the instruction causes only a slight degradation in retrieval performance compared with using it.
So, for convenience, you can generate embeddings without an instruction in all cases.
For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions for these short queries.
**The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.
</details>
## Usage
### Usage for Embedding Model
Here are some examples for using `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If that doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for other ways to install FlagEmbedding.
```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
# For an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query.
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages do not need the instruction.
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).
By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.
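For example, a minimal sketch (set the environment variable before the model is created, so that only the listed GPU is visible):
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # use only GPU 0; "" would hide all GPUs (CPU only)

from FlagEmbedding import FlagModel

model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1", "样例数据-2"])
```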
#### Using Sentence-Transformers
You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):
```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for instructions).
The instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```
#### Using Langchain
You can use `bge` with LangChain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs,
query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
#### Using HuggingFace Transformers
With the transformers package, you can use the model like this: first pass your input through the transformer model, then take the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to the query (do not add an instruction to passages):
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```
#### Usage of the ONNX files
```python
from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-en-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13")
model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-large-en-v1.5', revision="refs/pr/13",file_name="onnx/model.onnx")
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to the query (do not add an instruction to passages):
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')
model_output_ort = model_ort(**encoded_input)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# model_output and model_output_ort are identical
```
It's also possible to deploy the ONNX files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs
sentences = ["Embed this sentence via Infinity.", "Paris is in France."]
engine = AsyncEmbeddingEngine.from_args(
EngineArgs(model_name_or_path = "BAAI/bge-large-en-v1.5", device="cpu", engine="optimum" # or engine="torch"
))
async def main():
async with engine:
embeddings, usage = await engine.embed(sentences=sentences)
asyncio.run(main())
```
### Usage for Reranker
Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation
score = reranker.compute_score(['query', 'passage'])
print(score)
scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
#### Using Huggingface transformers
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
print(scores)
```
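The logits above are unbounded. If you prefer scores in the (0, 1) range, you can optionally apply a sigmoid; this is a small convenience continuing from the snippet above, not part of the official output:
```python
# continuing from the previous snippet: `scores` holds the unbounded logits
probs = torch.sigmoid(scores)  # squash into (0, 1) for easier thresholding
print(probs)
```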
## Evaluation
`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).
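If you want to reproduce numbers like the ones below yourself, a minimal run with the `mteb` package looks roughly like this (a sketch; the task selection and output folder are only examples):
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
evaluation = MTEB(tasks=["SciFact", "Banking77Classification"])  # any subset of MTEB tasks
evaluation.run(model, output_folder="results/bge-large-en-v1.5")
```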
- **MTEB**:
| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |
- **C-MTEB**:
We created the C-MTEB benchmark for Chinese text embeddings, which consists of 31 datasets covering 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |
- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
## Train
### BAAI Embedding
We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).
### BGE Reranker
A cross-encoder performs full attention over the input pair,
which is more accurate than an embedding model (i.e., a bi-encoder) but also more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by an embedding model.
We train the cross-encoder on multilingual pair data.
The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).
## Contact
If you have any questions or suggestions related to this project, feel free to open an issue or pull request.
You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]).
## Citation
If you find this repository useful, please consider giving it a star :star: and a citation:
```
@misc{bge_embedding,
title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
year={2023},
eprint={2309.07597},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
| [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BEAR",
"BIOSSES",
"SCIFACT"
] | Non_BioNLP |
Alibaba-NLP/gme-Qwen2-VL-2B-Instruct | Alibaba-NLP | sentence-similarity | [
"sentence-transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"mteb",
"transformers",
"Qwen2-VL",
"sentence-similarity",
"vidore",
"en",
"zh",
"arxiv:2412.16855",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,734 | 1,737 | 65,883 | 47 | ---
base_model:
- Qwen/Qwen2-VL-2B-Instruct
language:
- en
- zh
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2-VL
- sentence-similarity
- vidore
model-index:
- name: external
results:
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 61.03190209456061
- type: cos_sim_spearman
value: 67.54853383020948
- type: euclidean_pearson
value: 65.38958681599493
- type: euclidean_spearman
value: 67.54853383020948
- type: manhattan_pearson
value: 65.25341659273157
- type: manhattan_spearman
value: 67.34190190683134
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.83794357648487
- type: cos_sim_spearman
value: 54.03230997664373
- type: euclidean_pearson
value: 55.2072028123375
- type: euclidean_spearman
value: 54.032311102613264
- type: manhattan_pearson
value: 55.05163232251946
- type: manhattan_spearman
value: 53.81272176804127
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 72.55223880597015
- type: ap
value: 35.01515316721116
- type: f1
value: 66.44086070814382
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.75819999999999
- type: ap
value: 95.51009242092881
- type: f1
value: 96.75713119357414
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 61.971999999999994
- type: f1
value: 60.50745575187704
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.49
- type: f1
value: 51.576550662258434
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.272999999999996
- type: map_at_10
value: 52.782
- type: map_at_100
value: 53.339999999999996
- type: map_at_1000
value: 53.342999999999996
- type: map_at_3
value: 48.4
- type: map_at_5
value: 50.882000000000005
- type: mrr_at_1
value: 36.984
- type: mrr_at_10
value: 53.052
- type: mrr_at_100
value: 53.604
- type: mrr_at_1000
value: 53.607000000000006
- type: mrr_at_3
value: 48.613
- type: mrr_at_5
value: 51.159
- type: ndcg_at_1
value: 36.272999999999996
- type: ndcg_at_10
value: 61.524
- type: ndcg_at_100
value: 63.796
- type: ndcg_at_1000
value: 63.869
- type: ndcg_at_3
value: 52.456
- type: ndcg_at_5
value: 56.964000000000006
- type: precision_at_1
value: 36.272999999999996
- type: precision_at_10
value: 8.926
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 21.407999999999998
- type: precision_at_5
value: 15.049999999999999
- type: recall_at_1
value: 36.272999999999996
- type: recall_at_10
value: 89.25999999999999
- type: recall_at_100
value: 98.933
- type: recall_at_1000
value: 99.502
- type: recall_at_3
value: 64.225
- type: recall_at_5
value: 75.249
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 52.45236368396085
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 46.83781937870832
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.653430349851746
- type: mrr
value: 74.28736314470387
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.18568151905953
- type: cos_sim_spearman
value: 86.47666922475281
- type: euclidean_pearson
value: 87.25416218056225
- type: euclidean_spearman
value: 86.47666922475281
- type: manhattan_pearson
value: 87.04960508086356
- type: manhattan_spearman
value: 86.73992823533615
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 75.7464284612374
- type: cos_sim_spearman
value: 77.71894224189296
- type: euclidean_pearson
value: 77.63454068918787
- type: euclidean_spearman
value: 77.71894224189296
- type: manhattan_pearson
value: 77.58744810404339
- type: manhattan_spearman
value: 77.63293552726073
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.2435064935065
- type: f1
value: 79.44078343737895
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 44.68220155432257
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 40.666150477589284
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 44.23533333311907
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 43.01114481307774
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 86.4349853821696
- type: mrr
value: 88.80150793650795
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 87.56417400982208
- type: mrr
value: 89.85813492063491
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 30.623
- type: map_at_10
value: 40.482
- type: map_at_100
value: 41.997
- type: map_at_1000
value: 42.135
- type: map_at_3
value: 37.754
- type: map_at_5
value: 39.031
- type: mrr_at_1
value: 37.482
- type: mrr_at_10
value: 46.311
- type: mrr_at_100
value: 47.211999999999996
- type: mrr_at_1000
value: 47.27
- type: mrr_at_3
value: 44.157999999999994
- type: mrr_at_5
value: 45.145
- type: ndcg_at_1
value: 37.482
- type: ndcg_at_10
value: 46.142
- type: ndcg_at_100
value: 51.834
- type: ndcg_at_1000
value: 54.164
- type: ndcg_at_3
value: 42.309000000000005
- type: ndcg_at_5
value: 43.485
- type: precision_at_1
value: 37.482
- type: precision_at_10
value: 8.455
- type: precision_at_100
value: 1.3780000000000001
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.172
- type: precision_at_5
value: 13.705
- type: recall_at_1
value: 30.623
- type: recall_at_10
value: 56.77100000000001
- type: recall_at_100
value: 80.034
- type: recall_at_1000
value: 94.62899999999999
- type: recall_at_3
value: 44.663000000000004
- type: recall_at_5
value: 48.692
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 27.941
- type: map_at_10
value: 38.437
- type: map_at_100
value: 39.625
- type: map_at_1000
value: 39.753
- type: map_at_3
value: 35.388999999999996
- type: map_at_5
value: 37.113
- type: mrr_at_1
value: 34.522000000000006
- type: mrr_at_10
value: 43.864999999999995
- type: mrr_at_100
value: 44.533
- type: mrr_at_1000
value: 44.580999999999996
- type: mrr_at_3
value: 41.55
- type: mrr_at_5
value: 42.942
- type: ndcg_at_1
value: 34.522000000000006
- type: ndcg_at_10
value: 44.330000000000005
- type: ndcg_at_100
value: 48.61
- type: ndcg_at_1000
value: 50.712999999999994
- type: ndcg_at_3
value: 39.834
- type: ndcg_at_5
value: 42.016
- type: precision_at_1
value: 34.522000000000006
- type: precision_at_10
value: 8.471
- type: precision_at_100
value: 1.3379999999999999
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 19.363
- type: precision_at_5
value: 13.898
- type: recall_at_1
value: 27.941
- type: recall_at_10
value: 55.336
- type: recall_at_100
value: 73.51100000000001
- type: recall_at_1000
value: 86.636
- type: recall_at_3
value: 42.54
- type: recall_at_5
value: 48.392
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 32.681
- type: map_at_10
value: 45.48
- type: map_at_100
value: 46.542
- type: map_at_1000
value: 46.604
- type: map_at_3
value: 42.076
- type: map_at_5
value: 44.076
- type: mrr_at_1
value: 37.492
- type: mrr_at_10
value: 48.746
- type: mrr_at_100
value: 49.485
- type: mrr_at_1000
value: 49.517
- type: mrr_at_3
value: 45.998
- type: mrr_at_5
value: 47.681000000000004
- type: ndcg_at_1
value: 37.492
- type: ndcg_at_10
value: 51.778999999999996
- type: ndcg_at_100
value: 56.294
- type: ndcg_at_1000
value: 57.58
- type: ndcg_at_3
value: 45.856
- type: ndcg_at_5
value: 48.968
- type: precision_at_1
value: 37.492
- type: precision_at_10
value: 8.620999999999999
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 20.773
- type: precision_at_5
value: 14.596
- type: recall_at_1
value: 32.681
- type: recall_at_10
value: 67.196
- type: recall_at_100
value: 87.027
- type: recall_at_1000
value: 96.146
- type: recall_at_3
value: 51.565000000000005
- type: recall_at_5
value: 59.123999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 22.421
- type: map_at_10
value: 30.127
- type: map_at_100
value: 31.253999999999998
- type: map_at_1000
value: 31.344
- type: map_at_3
value: 27.673
- type: map_at_5
value: 29.182000000000002
- type: mrr_at_1
value: 24.068
- type: mrr_at_10
value: 31.857000000000003
- type: mrr_at_100
value: 32.808
- type: mrr_at_1000
value: 32.881
- type: mrr_at_3
value: 29.397000000000002
- type: mrr_at_5
value: 30.883
- type: ndcg_at_1
value: 24.068
- type: ndcg_at_10
value: 34.642
- type: ndcg_at_100
value: 40.327
- type: ndcg_at_1000
value: 42.55
- type: ndcg_at_3
value: 29.868
- type: ndcg_at_5
value: 32.461
- type: precision_at_1
value: 24.068
- type: precision_at_10
value: 5.390000000000001
- type: precision_at_100
value: 0.873
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 12.692999999999998
- type: precision_at_5
value: 9.107
- type: recall_at_1
value: 22.421
- type: recall_at_10
value: 46.846
- type: recall_at_100
value: 73.409
- type: recall_at_1000
value: 90.06
- type: recall_at_3
value: 34.198
- type: recall_at_5
value: 40.437
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 16.494
- type: map_at_10
value: 24.4
- type: map_at_100
value: 25.718999999999998
- type: map_at_1000
value: 25.840000000000003
- type: map_at_3
value: 21.731
- type: map_at_5
value: 23.247999999999998
- type: mrr_at_1
value: 20.274
- type: mrr_at_10
value: 28.866000000000003
- type: mrr_at_100
value: 29.889
- type: mrr_at_1000
value: 29.957
- type: mrr_at_3
value: 26.284999999999997
- type: mrr_at_5
value: 27.79
- type: ndcg_at_1
value: 20.274
- type: ndcg_at_10
value: 29.666999999999998
- type: ndcg_at_100
value: 36.095
- type: ndcg_at_1000
value: 38.87
- type: ndcg_at_3
value: 24.672
- type: ndcg_at_5
value: 27.106
- type: precision_at_1
value: 20.274
- type: precision_at_10
value: 5.5969999999999995
- type: precision_at_100
value: 1.04
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_3
value: 12.023
- type: precision_at_5
value: 8.98
- type: recall_at_1
value: 16.494
- type: recall_at_10
value: 41.400999999999996
- type: recall_at_100
value: 69.811
- type: recall_at_1000
value: 89.422
- type: recall_at_3
value: 27.834999999999997
- type: recall_at_5
value: 33.774
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 26.150000000000002
- type: map_at_10
value: 36.012
- type: map_at_100
value: 37.377
- type: map_at_1000
value: 37.497
- type: map_at_3
value: 32.712
- type: map_at_5
value: 34.475
- type: mrr_at_1
value: 32.05
- type: mrr_at_10
value: 41.556
- type: mrr_at_100
value: 42.451
- type: mrr_at_1000
value: 42.498000000000005
- type: mrr_at_3
value: 38.659
- type: mrr_at_5
value: 40.314
- type: ndcg_at_1
value: 32.05
- type: ndcg_at_10
value: 42.132
- type: ndcg_at_100
value: 48.028999999999996
- type: ndcg_at_1000
value: 50.229
- type: ndcg_at_3
value: 36.622
- type: ndcg_at_5
value: 39.062000000000005
- type: precision_at_1
value: 32.05
- type: precision_at_10
value: 7.767
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 17.355999999999998
- type: precision_at_5
value: 12.474
- type: recall_at_1
value: 26.150000000000002
- type: recall_at_10
value: 55.205000000000005
- type: recall_at_100
value: 80.2
- type: recall_at_1000
value: 94.524
- type: recall_at_3
value: 39.322
- type: recall_at_5
value: 45.761
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 23.741
- type: map_at_10
value: 33.51
- type: map_at_100
value: 34.882999999999996
- type: map_at_1000
value: 34.995
- type: map_at_3
value: 30.514000000000003
- type: map_at_5
value: 32.085
- type: mrr_at_1
value: 28.653000000000002
- type: mrr_at_10
value: 38.059
- type: mrr_at_100
value: 39.050000000000004
- type: mrr_at_1000
value: 39.107
- type: mrr_at_3
value: 35.445
- type: mrr_at_5
value: 36.849
- type: ndcg_at_1
value: 28.653000000000002
- type: ndcg_at_10
value: 39.186
- type: ndcg_at_100
value: 45.301
- type: ndcg_at_1000
value: 47.547
- type: ndcg_at_3
value: 34.103
- type: ndcg_at_5
value: 36.239
- type: precision_at_1
value: 28.653000000000002
- type: precision_at_10
value: 7.295
- type: precision_at_100
value: 1.2189999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 16.438
- type: precision_at_5
value: 11.804
- type: recall_at_1
value: 23.741
- type: recall_at_10
value: 51.675000000000004
- type: recall_at_100
value: 78.13799999999999
- type: recall_at_1000
value: 93.12700000000001
- type: recall_at_3
value: 37.033
- type: recall_at_5
value: 42.793
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 25.281666666666663
- type: map_at_10
value: 34.080666666666666
- type: map_at_100
value: 35.278749999999995
- type: map_at_1000
value: 35.40183333333333
- type: map_at_3
value: 31.45316666666667
- type: map_at_5
value: 32.92716666666667
- type: mrr_at_1
value: 29.78783333333333
- type: mrr_at_10
value: 38.077333333333335
- type: mrr_at_100
value: 38.936499999999995
- type: mrr_at_1000
value: 39.000249999999994
- type: mrr_at_3
value: 35.7735
- type: mrr_at_5
value: 37.07683333333334
- type: ndcg_at_1
value: 29.78783333333333
- type: ndcg_at_10
value: 39.18300000000001
- type: ndcg_at_100
value: 44.444750000000006
- type: ndcg_at_1000
value: 46.90316666666667
- type: ndcg_at_3
value: 34.69308333333333
- type: ndcg_at_5
value: 36.80316666666666
- type: precision_at_1
value: 29.78783333333333
- type: precision_at_10
value: 6.820749999999999
- type: precision_at_100
value: 1.1224166666666666
- type: precision_at_1000
value: 0.1525
- type: precision_at_3
value: 15.936333333333335
- type: precision_at_5
value: 11.282333333333334
- type: recall_at_1
value: 25.281666666666663
- type: recall_at_10
value: 50.282
- type: recall_at_100
value: 73.54558333333334
- type: recall_at_1000
value: 90.64241666666666
- type: recall_at_3
value: 37.800999999999995
- type: recall_at_5
value: 43.223000000000006
- type: map_at_1
value: 19.07
- type: map_at_10
value: 26.608999999999998
- type: map_at_100
value: 27.625
- type: map_at_1000
value: 27.743000000000002
- type: map_at_3
value: 24.532999999999998
- type: map_at_5
value: 25.671
- type: mrr_at_1
value: 20.518
- type: mrr_at_10
value: 28.541
- type: mrr_at_100
value: 29.453000000000003
- type: mrr_at_1000
value: 29.536
- type: mrr_at_3
value: 26.71
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 20.518
- type: ndcg_at_10
value: 30.855
- type: ndcg_at_100
value: 35.973
- type: ndcg_at_1000
value: 38.827
- type: ndcg_at_3
value: 26.868
- type: ndcg_at_5
value: 28.74
- type: precision_at_1
value: 20.518
- type: precision_at_10
value: 4.843
- type: precision_at_100
value: 0.799
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.645
- type: precision_at_5
value: 8.133
- type: recall_at_1
value: 19.07
- type: recall_at_10
value: 41.925000000000004
- type: recall_at_100
value: 65.68
- type: recall_at_1000
value: 86.713
- type: recall_at_3
value: 31.251
- type: recall_at_5
value: 35.653
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 23.452
- type: map_at_10
value: 30.231
- type: map_at_100
value: 31.227
- type: map_at_1000
value: 31.338
- type: map_at_3
value: 28.083000000000002
- type: map_at_5
value: 29.125
- type: mrr_at_1
value: 25.613000000000003
- type: mrr_at_10
value: 32.62
- type: mrr_at_100
value: 33.469
- type: mrr_at_1000
value: 33.554
- type: mrr_at_3
value: 30.368000000000002
- type: mrr_at_5
value: 31.502999999999997
- type: ndcg_at_1
value: 25.613000000000003
- type: ndcg_at_10
value: 34.441
- type: ndcg_at_100
value: 39.253
- type: ndcg_at_1000
value: 42.105
- type: ndcg_at_3
value: 30.183
- type: ndcg_at_5
value: 31.917
- type: precision_at_1
value: 25.613000000000003
- type: precision_at_10
value: 5.367999999999999
- type: precision_at_100
value: 0.848
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.73
- type: precision_at_5
value: 8.773
- type: recall_at_1
value: 23.452
- type: recall_at_10
value: 45.021
- type: recall_at_100
value: 66.563
- type: recall_at_1000
value: 87.713
- type: recall_at_3
value: 33.433
- type: recall_at_5
value: 37.637
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 16.11
- type: map_at_10
value: 22.832
- type: map_at_100
value: 23.829
- type: map_at_1000
value: 23.959
- type: map_at_3
value: 20.66
- type: map_at_5
value: 21.851000000000003
- type: mrr_at_1
value: 19.408
- type: mrr_at_10
value: 26.354
- type: mrr_at_100
value: 27.237000000000002
- type: mrr_at_1000
value: 27.32
- type: mrr_at_3
value: 24.243000000000002
- type: mrr_at_5
value: 25.430000000000003
- type: ndcg_at_1
value: 19.408
- type: ndcg_at_10
value: 27.239
- type: ndcg_at_100
value: 32.286
- type: ndcg_at_1000
value: 35.498000000000005
- type: ndcg_at_3
value: 23.244
- type: ndcg_at_5
value: 25.080999999999996
- type: precision_at_1
value: 19.408
- type: precision_at_10
value: 4.917
- type: precision_at_100
value: 0.874
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 10.863
- type: precision_at_5
value: 7.887
- type: recall_at_1
value: 16.11
- type: recall_at_10
value: 37.075
- type: recall_at_100
value: 60.251999999999995
- type: recall_at_1000
value: 83.38600000000001
- type: recall_at_3
value: 25.901999999999997
- type: recall_at_5
value: 30.612000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 25.941
- type: map_at_10
value: 33.711999999999996
- type: map_at_100
value: 34.926
- type: map_at_1000
value: 35.05
- type: map_at_3
value: 31.075000000000003
- type: map_at_5
value: 32.611000000000004
- type: mrr_at_1
value: 30.784
- type: mrr_at_10
value: 38.079
- type: mrr_at_100
value: 39.018
- type: mrr_at_1000
value: 39.09
- type: mrr_at_3
value: 35.603
- type: mrr_at_5
value: 36.988
- type: ndcg_at_1
value: 30.784
- type: ndcg_at_10
value: 38.586
- type: ndcg_at_100
value: 44.205
- type: ndcg_at_1000
value: 46.916000000000004
- type: ndcg_at_3
value: 33.899
- type: ndcg_at_5
value: 36.11
- type: precision_at_1
value: 30.784
- type: precision_at_10
value: 6.409
- type: precision_at_100
value: 1.034
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 15.112
- type: precision_at_5
value: 10.728
- type: recall_at_1
value: 25.941
- type: recall_at_10
value: 49.242999999999995
- type: recall_at_100
value: 73.85000000000001
- type: recall_at_1000
value: 92.782
- type: recall_at_3
value: 36.204
- type: recall_at_5
value: 41.908
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 24.401999999999997
- type: map_at_10
value: 33.195
- type: map_at_100
value: 34.699999999999996
- type: map_at_1000
value: 34.946
- type: map_at_3
value: 30.570999999999998
- type: map_at_5
value: 32.0
- type: mrr_at_1
value: 28.656
- type: mrr_at_10
value: 37.039
- type: mrr_at_100
value: 38.049
- type: mrr_at_1000
value: 38.108
- type: mrr_at_3
value: 34.717
- type: mrr_at_5
value: 36.07
- type: ndcg_at_1
value: 28.656
- type: ndcg_at_10
value: 38.557
- type: ndcg_at_100
value: 44.511
- type: ndcg_at_1000
value: 47.346
- type: ndcg_at_3
value: 34.235
- type: ndcg_at_5
value: 36.260999999999996
- type: precision_at_1
value: 28.656
- type: precision_at_10
value: 7.312
- type: precision_at_100
value: 1.451
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 15.942
- type: precision_at_5
value: 11.66
- type: recall_at_1
value: 24.401999999999997
- type: recall_at_10
value: 48.791000000000004
- type: recall_at_100
value: 76.211
- type: recall_at_1000
value: 93.92
- type: recall_at_3
value: 36.975
- type: recall_at_5
value: 42.01
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 18.762
- type: map_at_10
value: 32.412
- type: map_at_100
value: 34.506
- type: map_at_1000
value: 34.678
- type: map_at_3
value: 27.594
- type: map_at_5
value: 30.128
- type: mrr_at_1
value: 42.345
- type: mrr_at_10
value: 54.443
- type: mrr_at_100
value: 55.05799999999999
- type: mrr_at_1000
value: 55.076
- type: mrr_at_3
value: 51.553000000000004
- type: mrr_at_5
value: 53.269
- type: ndcg_at_1
value: 42.345
- type: ndcg_at_10
value: 42.304
- type: ndcg_at_100
value: 49.425000000000004
- type: ndcg_at_1000
value: 52.123
- type: ndcg_at_3
value: 36.271
- type: ndcg_at_5
value: 38.216
- type: precision_at_1
value: 42.345
- type: precision_at_10
value: 12.808
- type: precision_at_100
value: 2.062
- type: precision_at_1000
value: 0.258
- type: precision_at_3
value: 26.840000000000003
- type: precision_at_5
value: 20.052
- type: recall_at_1
value: 18.762
- type: recall_at_10
value: 47.976
- type: recall_at_100
value: 71.86
- type: recall_at_1000
value: 86.61999999999999
- type: recall_at_3
value: 32.708999999999996
- type: recall_at_5
value: 39.151
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 24.871
- type: map_at_10
value: 37.208999999999996
- type: map_at_100
value: 38.993
- type: map_at_1000
value: 39.122
- type: map_at_3
value: 33.2
- type: map_at_5
value: 35.33
- type: mrr_at_1
value: 37.884
- type: mrr_at_10
value: 46.189
- type: mrr_at_100
value: 47.147
- type: mrr_at_1000
value: 47.195
- type: mrr_at_3
value: 43.728
- type: mrr_at_5
value: 44.994
- type: ndcg_at_1
value: 37.884
- type: ndcg_at_10
value: 43.878
- type: ndcg_at_100
value: 51.002
- type: ndcg_at_1000
value: 53.161
- type: ndcg_at_3
value: 38.729
- type: ndcg_at_5
value: 40.628
- type: precision_at_1
value: 37.884
- type: precision_at_10
value: 9.75
- type: precision_at_100
value: 1.558
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 21.964
- type: precision_at_5
value: 15.719
- type: recall_at_1
value: 24.871
- type: recall_at_10
value: 54.615
- type: recall_at_100
value: 84.276
- type: recall_at_1000
value: 98.578
- type: recall_at_3
value: 38.936
- type: recall_at_5
value: 45.061
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 76.12748045700542
- type: cos_sim_ap
value: 84.47948419710998
- type: cos_sim_f1
value: 77.88108108108108
- type: cos_sim_precision
value: 72.43112809169516
- type: cos_sim_recall
value: 84.21790974982464
- type: dot_accuracy
value: 76.12748045700542
- type: dot_ap
value: 84.4933237839786
- type: dot_f1
value: 77.88108108108108
- type: dot_precision
value: 72.43112809169516
- type: dot_recall
value: 84.21790974982464
- type: euclidean_accuracy
value: 76.12748045700542
- type: euclidean_ap
value: 84.47947997540409
- type: euclidean_f1
value: 77.88108108108108
- type: euclidean_precision
value: 72.43112809169516
- type: euclidean_recall
value: 84.21790974982464
- type: manhattan_accuracy
value: 75.40589296452195
- type: manhattan_ap
value: 83.74383956930585
- type: manhattan_f1
value: 77.0983342289092
- type: manhattan_precision
value: 71.34049323786795
- type: manhattan_recall
value: 83.86719663315408
- type: max_accuracy
value: 76.12748045700542
- type: max_ap
value: 84.4933237839786
- type: max_f1
value: 77.88108108108108
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 66.781
- type: map_at_10
value: 74.539
- type: map_at_100
value: 74.914
- type: map_at_1000
value: 74.921
- type: map_at_3
value: 72.734
- type: map_at_5
value: 73.788
- type: mrr_at_1
value: 66.913
- type: mrr_at_10
value: 74.543
- type: mrr_at_100
value: 74.914
- type: mrr_at_1000
value: 74.921
- type: mrr_at_3
value: 72.831
- type: mrr_at_5
value: 73.76899999999999
- type: ndcg_at_1
value: 67.018
- type: ndcg_at_10
value: 78.34299999999999
- type: ndcg_at_100
value: 80.138
- type: ndcg_at_1000
value: 80.322
- type: ndcg_at_3
value: 74.667
- type: ndcg_at_5
value: 76.518
- type: precision_at_1
value: 67.018
- type: precision_at_10
value: 9.115
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 26.906000000000002
- type: precision_at_5
value: 17.092
- type: recall_at_1
value: 66.781
- type: recall_at_10
value: 90.253
- type: recall_at_100
value: 98.52499999999999
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 80.05799999999999
- type: recall_at_5
value: 84.615
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.685
- type: map_at_10
value: 21.65
- type: map_at_100
value: 30.952
- type: map_at_1000
value: 33.049
- type: map_at_3
value: 14.953
- type: map_at_5
value: 17.592
- type: mrr_at_1
value: 72.0
- type: mrr_at_10
value: 78.054
- type: mrr_at_100
value: 78.41900000000001
- type: mrr_at_1000
value: 78.425
- type: mrr_at_3
value: 76.5
- type: mrr_at_5
value: 77.28699999999999
- type: ndcg_at_1
value: 61.25000000000001
- type: ndcg_at_10
value: 46.306000000000004
- type: ndcg_at_100
value: 50.867
- type: ndcg_at_1000
value: 58.533
- type: ndcg_at_3
value: 50.857
- type: ndcg_at_5
value: 48.283
- type: precision_at_1
value: 72.0
- type: precision_at_10
value: 37.3
- type: precision_at_100
value: 11.95
- type: precision_at_1000
value: 2.528
- type: precision_at_3
value: 53.583000000000006
- type: precision_at_5
value: 46.6
- type: recall_at_1
value: 9.685
- type: recall_at_10
value: 27.474999999999998
- type: recall_at_100
value: 56.825
- type: recall_at_1000
value: 81.792
- type: recall_at_3
value: 15.939
- type: recall_at_5
value: 19.853
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 24.528
- type: map_at_10
value: 76.304
- type: map_at_100
value: 79.327
- type: map_at_1000
value: 79.373
- type: map_at_3
value: 52.035
- type: map_at_5
value: 66.074
- type: mrr_at_1
value: 86.05000000000001
- type: mrr_at_10
value: 90.74
- type: mrr_at_100
value: 90.809
- type: mrr_at_1000
value: 90.81099999999999
- type: mrr_at_3
value: 90.30799999999999
- type: mrr_at_5
value: 90.601
- type: ndcg_at_1
value: 86.05000000000001
- type: ndcg_at_10
value: 84.518
- type: ndcg_at_100
value: 87.779
- type: ndcg_at_1000
value: 88.184
- type: ndcg_at_3
value: 82.339
- type: ndcg_at_5
value: 81.613
- type: precision_at_1
value: 86.05000000000001
- type: precision_at_10
value: 40.945
- type: precision_at_100
value: 4.787
- type: precision_at_1000
value: 0.48900000000000005
- type: precision_at_3
value: 74.117
- type: precision_at_5
value: 62.86000000000001
- type: recall_at_1
value: 24.528
- type: recall_at_10
value: 86.78
- type: recall_at_100
value: 97.198
- type: recall_at_1000
value: 99.227
- type: recall_at_3
value: 54.94799999999999
- type: recall_at_5
value: 72.053
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 52.1
- type: map_at_10
value: 62.502
- type: map_at_100
value: 63.026
- type: map_at_1000
value: 63.04
- type: map_at_3
value: 59.782999999999994
- type: map_at_5
value: 61.443000000000005
- type: mrr_at_1
value: 52.1
- type: mrr_at_10
value: 62.502
- type: mrr_at_100
value: 63.026
- type: mrr_at_1000
value: 63.04
- type: mrr_at_3
value: 59.782999999999994
- type: mrr_at_5
value: 61.443000000000005
- type: ndcg_at_1
value: 52.1
- type: ndcg_at_10
value: 67.75999999999999
- type: ndcg_at_100
value: 70.072
- type: ndcg_at_1000
value: 70.441
- type: ndcg_at_3
value: 62.28
- type: ndcg_at_5
value: 65.25800000000001
- type: precision_at_1
value: 52.1
- type: precision_at_10
value: 8.43
- type: precision_at_100
value: 0.946
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 23.166999999999998
- type: precision_at_5
value: 15.340000000000002
- type: recall_at_1
value: 52.1
- type: recall_at_10
value: 84.3
- type: recall_at_100
value: 94.6
- type: recall_at_1000
value: 97.5
- type: recall_at_3
value: 69.5
- type: recall_at_5
value: 76.7
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 62.805000000000014
- type: f1
value: 56.401757250989384
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.734
- type: map_at_10
value: 90.089
- type: map_at_100
value: 90.274
- type: map_at_1000
value: 90.286
- type: map_at_3
value: 89.281
- type: map_at_5
value: 89.774
- type: mrr_at_1
value: 90.039
- type: mrr_at_10
value: 94.218
- type: mrr_at_100
value: 94.24
- type: mrr_at_1000
value: 94.24
- type: mrr_at_3
value: 93.979
- type: mrr_at_5
value: 94.137
- type: ndcg_at_1
value: 90.039
- type: ndcg_at_10
value: 92.597
- type: ndcg_at_100
value: 93.147
- type: ndcg_at_1000
value: 93.325
- type: ndcg_at_3
value: 91.64999999999999
- type: ndcg_at_5
value: 92.137
- type: precision_at_1
value: 90.039
- type: precision_at_10
value: 10.809000000000001
- type: precision_at_100
value: 1.133
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 34.338
- type: precision_at_5
value: 21.089
- type: recall_at_1
value: 83.734
- type: recall_at_10
value: 96.161
- type: recall_at_100
value: 98.137
- type: recall_at_1000
value: 99.182
- type: recall_at_3
value: 93.551
- type: recall_at_5
value: 94.878
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 24.529999999999998
- type: map_at_10
value: 37.229
- type: map_at_100
value: 39.333
- type: map_at_1000
value: 39.491
- type: map_at_3
value: 32.177
- type: map_at_5
value: 35.077999999999996
- type: mrr_at_1
value: 45.678999999999995
- type: mrr_at_10
value: 53.952
- type: mrr_at_100
value: 54.727000000000004
- type: mrr_at_1000
value: 54.761
- type: mrr_at_3
value: 51.568999999999996
- type: mrr_at_5
value: 52.973000000000006
- type: ndcg_at_1
value: 45.678999999999995
- type: ndcg_at_10
value: 45.297
- type: ndcg_at_100
value: 52.516
- type: ndcg_at_1000
value: 55.16
- type: ndcg_at_3
value: 40.569
- type: ndcg_at_5
value: 42.49
- type: precision_at_1
value: 45.678999999999995
- type: precision_at_10
value: 12.269
- type: precision_at_100
value: 1.9709999999999999
- type: precision_at_1000
value: 0.244
- type: precision_at_3
value: 25.72
- type: precision_at_5
value: 19.66
- type: recall_at_1
value: 24.529999999999998
- type: recall_at_10
value: 51.983999999999995
- type: recall_at_100
value: 78.217
- type: recall_at_1000
value: 94.104
- type: recall_at_3
value: 36.449999999999996
- type: recall_at_5
value: 43.336999999999996
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 41.519
- type: map_at_10
value: 64.705
- type: map_at_100
value: 65.554
- type: map_at_1000
value: 65.613
- type: map_at_3
value: 61.478
- type: map_at_5
value: 63.55800000000001
- type: mrr_at_1
value: 83.038
- type: mrr_at_10
value: 87.82900000000001
- type: mrr_at_100
value: 87.96000000000001
- type: mrr_at_1000
value: 87.96300000000001
- type: mrr_at_3
value: 87.047
- type: mrr_at_5
value: 87.546
- type: ndcg_at_1
value: 83.038
- type: ndcg_at_10
value: 72.928
- type: ndcg_at_100
value: 75.778
- type: ndcg_at_1000
value: 76.866
- type: ndcg_at_3
value: 68.46600000000001
- type: ndcg_at_5
value: 71.036
- type: precision_at_1
value: 83.038
- type: precision_at_10
value: 15.040999999999999
- type: precision_at_100
value: 1.7260000000000002
- type: precision_at_1000
value: 0.187
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.188999999999997
- type: recall_at_1
value: 41.519
- type: recall_at_10
value: 75.20599999999999
- type: recall_at_100
value: 86.3
- type: recall_at_1000
value: 93.437
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.473
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 52.04309349749903
- type: f1
value: 39.91893257315586
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.0428
- type: ap
value: 94.48278082595033
- type: f1
value: 96.0409595432081
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.60975609756099
- type: ap
value: 54.30148799475452
- type: f1
value: 80.55899583002706
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 66.44418108776416
- type: cos_sim_spearman
value: 72.79912770347306
- type: euclidean_pearson
value: 71.11194894579198
- type: euclidean_spearman
value: 72.79912104971427
- type: manhattan_pearson
value: 70.96800061808604
- type: manhattan_spearman
value: 72.63525186107175
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: 8e0c766dbe9e16e1d221116a3f36795fbade07f6
metrics:
- type: map
value: 27.9616280919871
- type: mrr
value: 26.544047619047618
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 68.32300000000001
- type: map_at_10
value: 77.187
- type: map_at_100
value: 77.496
- type: map_at_1000
value: 77.503
- type: map_at_3
value: 75.405
- type: map_at_5
value: 76.539
- type: mrr_at_1
value: 70.616
- type: mrr_at_10
value: 77.703
- type: mrr_at_100
value: 77.97699999999999
- type: mrr_at_1000
value: 77.984
- type: mrr_at_3
value: 76.139
- type: mrr_at_5
value: 77.125
- type: ndcg_at_1
value: 70.616
- type: ndcg_at_10
value: 80.741
- type: ndcg_at_100
value: 82.123
- type: ndcg_at_1000
value: 82.32300000000001
- type: ndcg_at_3
value: 77.35600000000001
- type: ndcg_at_5
value: 79.274
- type: precision_at_1
value: 70.616
- type: precision_at_10
value: 9.696
- type: precision_at_100
value: 1.038
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.026000000000003
- type: precision_at_5
value: 18.433
- type: recall_at_1
value: 68.32300000000001
- type: recall_at_10
value: 91.186
- type: recall_at_100
value: 97.439
- type: recall_at_1000
value: 99.004
- type: recall_at_3
value: 82.218
- type: recall_at_5
value: 86.797
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.496000000000002
- type: map_at_10
value: 33.82
- type: map_at_100
value: 35.013
- type: map_at_1000
value: 35.063
- type: map_at_3
value: 29.910999999999998
- type: map_at_5
value: 32.086
- type: mrr_at_1
value: 22.092
- type: mrr_at_10
value: 34.404
- type: mrr_at_100
value: 35.534
- type: mrr_at_1000
value: 35.577999999999996
- type: mrr_at_3
value: 30.544
- type: mrr_at_5
value: 32.711
- type: ndcg_at_1
value: 22.092
- type: ndcg_at_10
value: 40.877
- type: ndcg_at_100
value: 46.619
- type: ndcg_at_1000
value: 47.823
- type: ndcg_at_3
value: 32.861000000000004
- type: ndcg_at_5
value: 36.769
- type: precision_at_1
value: 22.092
- type: precision_at_10
value: 6.54
- type: precision_at_100
value: 0.943
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 14.069
- type: precision_at_5
value: 10.424
- type: recall_at_1
value: 21.496000000000002
- type: recall_at_10
value: 62.67
- type: recall_at_100
value: 89.24499999999999
- type: recall_at_1000
value: 98.312
- type: recall_at_3
value: 40.796
- type: recall_at_5
value: 50.21600000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.74555403556772
- type: f1
value: 95.61381879323093
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 85.82763337893297
- type: f1
value: 63.17139719465236
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 78.51714862138535
- type: f1
value: 76.3995118440293
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.78143913920646
- type: f1
value: 72.6141122227626
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.03698722259583
- type: f1
value: 79.36511484240766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.98722259583053
- type: f1
value: 76.5974920207624
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 51.800000000000004
- type: map_at_10
value: 57.938
- type: map_at_100
value: 58.494
- type: map_at_1000
value: 58.541
- type: map_at_3
value: 56.617
- type: map_at_5
value: 57.302
- type: mrr_at_1
value: 51.800000000000004
- type: mrr_at_10
value: 57.938
- type: mrr_at_100
value: 58.494
- type: mrr_at_1000
value: 58.541
- type: mrr_at_3
value: 56.617
- type: mrr_at_5
value: 57.302
- type: ndcg_at_1
value: 51.800000000000004
- type: ndcg_at_10
value: 60.891
- type: ndcg_at_100
value: 63.897000000000006
- type: ndcg_at_1000
value: 65.231
- type: ndcg_at_3
value: 58.108000000000004
- type: ndcg_at_5
value: 59.343
- type: precision_at_1
value: 51.800000000000004
- type: precision_at_10
value: 7.02
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 20.8
- type: precision_at_5
value: 13.08
- type: recall_at_1
value: 51.800000000000004
- type: recall_at_10
value: 70.19999999999999
- type: recall_at_100
value: 85.0
- type: recall_at_1000
value: 95.7
- type: recall_at_3
value: 62.4
- type: recall_at_5
value: 65.4
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 38.68901889835701
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 38.0740589898848
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.41312482460189
- type: mrr
value: 34.713530863302495
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 80.39333333333335
- type: f1
value: 80.42683132366277
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.232
- type: map_at_10
value: 13.442000000000002
- type: map_at_100
value: 17.443
- type: map_at_1000
value: 19.1
- type: map_at_3
value: 9.794
- type: map_at_5
value: 11.375
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 58.628
- type: mrr_at_100
value: 59.077
- type: mrr_at_1000
value: 59.119
- type: mrr_at_3
value: 56.914
- type: mrr_at_5
value: 57.921
- type: ndcg_at_1
value: 48.762
- type: ndcg_at_10
value: 37.203
- type: ndcg_at_100
value: 34.556
- type: ndcg_at_1000
value: 43.601
- type: ndcg_at_3
value: 43.004
- type: ndcg_at_5
value: 40.181
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 27.276
- type: precision_at_100
value: 8.981
- type: precision_at_1000
value: 2.228
- type: precision_at_3
value: 39.628
- type: precision_at_5
value: 33.808
- type: recall_at_1
value: 6.232
- type: recall_at_10
value: 18.137
- type: recall_at_100
value: 36.101
- type: recall_at_1000
value: 68.733
- type: recall_at_3
value: 10.978
- type: recall_at_5
value: 13.718
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 35.545
- type: map_at_10
value: 52.083
- type: map_at_100
value: 52.954
- type: map_at_1000
value: 52.96999999999999
- type: map_at_3
value: 47.508
- type: map_at_5
value: 50.265
- type: mrr_at_1
value: 40.122
- type: mrr_at_10
value: 54.567
- type: mrr_at_100
value: 55.19199999999999
- type: mrr_at_1000
value: 55.204
- type: mrr_at_3
value: 51.043000000000006
- type: mrr_at_5
value: 53.233
- type: ndcg_at_1
value: 40.122
- type: ndcg_at_10
value: 60.012
- type: ndcg_at_100
value: 63.562
- type: ndcg_at_1000
value: 63.94
- type: ndcg_at_3
value: 51.681
- type: ndcg_at_5
value: 56.154
- type: precision_at_1
value: 40.122
- type: precision_at_10
value: 9.774
- type: precision_at_100
value: 1.176
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 23.426
- type: precision_at_5
value: 16.686
- type: recall_at_1
value: 35.545
- type: recall_at_10
value: 81.557
- type: recall_at_100
value: 96.729
- type: recall_at_1000
value: 99.541
- type: recall_at_3
value: 60.185
- type: recall_at_5
value: 70.411
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 70.7634001082837
- type: cos_sim_ap
value: 74.97527385556558
- type: cos_sim_f1
value: 72.77277277277277
- type: cos_sim_precision
value: 69.17221693625119
- type: cos_sim_recall
value: 76.76874340021119
- type: dot_accuracy
value: 70.7634001082837
- type: dot_ap
value: 74.97527385556558
- type: dot_f1
value: 72.77277277277277
- type: dot_precision
value: 69.17221693625119
- type: dot_recall
value: 76.76874340021119
- type: euclidean_accuracy
value: 70.7634001082837
- type: euclidean_ap
value: 74.97527385556558
- type: euclidean_f1
value: 72.77277277277277
- type: euclidean_precision
value: 69.17221693625119
- type: euclidean_recall
value: 76.76874340021119
- type: manhattan_accuracy
value: 69.89713048186248
- type: manhattan_ap
value: 74.25943370061067
- type: manhattan_f1
value: 72.17268887846082
- type: manhattan_precision
value: 64.94932432432432
- type: manhattan_recall
value: 81.20380147835269
- type: max_accuracy
value: 70.7634001082837
- type: max_ap
value: 74.97527385556558
- type: max_f1
value: 72.77277277277277
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 92.92000000000002
- type: ap
value: 91.98475625106201
- type: f1
value: 92.91841470541901
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 41.23764415526825
- type: cos_sim_spearman
value: 46.872669471694664
- type: euclidean_pearson
value: 46.434144530918566
- type: euclidean_spearman
value: 46.872669471694664
- type: manhattan_pearson
value: 46.39678126910133
- type: manhattan_spearman
value: 46.55877754642116
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 28.77503601696299
- type: cos_sim_spearman
value: 31.818095557325606
- type: euclidean_pearson
value: 29.811479220397125
- type: euclidean_spearman
value: 31.817046821577673
- type: manhattan_pearson
value: 29.901628633314214
- type: manhattan_spearman
value: 31.991472038092084
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 68.908
- type: map_at_10
value: 83.19
- type: map_at_100
value: 83.842
- type: map_at_1000
value: 83.858
- type: map_at_3
value: 80.167
- type: map_at_5
value: 82.053
- type: mrr_at_1
value: 79.46
- type: mrr_at_10
value: 86.256
- type: mrr_at_100
value: 86.37
- type: mrr_at_1000
value: 86.371
- type: mrr_at_3
value: 85.177
- type: mrr_at_5
value: 85.908
- type: ndcg_at_1
value: 79.5
- type: ndcg_at_10
value: 87.244
- type: ndcg_at_100
value: 88.532
- type: ndcg_at_1000
value: 88.626
- type: ndcg_at_3
value: 84.161
- type: ndcg_at_5
value: 85.835
- type: precision_at_1
value: 79.5
- type: precision_at_10
value: 13.339
- type: precision_at_100
value: 1.53
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 36.97
- type: precision_at_5
value: 24.384
- type: recall_at_1
value: 68.908
- type: recall_at_10
value: 95.179
- type: recall_at_100
value: 99.579
- type: recall_at_1000
value: 99.964
- type: recall_at_3
value: 86.424
- type: recall_at_5
value: 91.065
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 65.17897847862794
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 66.22194961632586
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.668
- type: map_at_10
value: 13.921
- type: map_at_100
value: 16.391
- type: map_at_1000
value: 16.749
- type: map_at_3
value: 10.001999999999999
- type: map_at_5
value: 11.974
- type: mrr_at_1
value: 27.800000000000004
- type: mrr_at_10
value: 39.290000000000006
- type: mrr_at_100
value: 40.313
- type: mrr_at_1000
value: 40.355999999999995
- type: mrr_at_3
value: 35.667
- type: mrr_at_5
value: 37.742
- type: ndcg_at_1
value: 27.800000000000004
- type: ndcg_at_10
value: 23.172
- type: ndcg_at_100
value: 32.307
- type: ndcg_at_1000
value: 38.048
- type: ndcg_at_3
value: 22.043
- type: ndcg_at_5
value: 19.287000000000003
- type: precision_at_1
value: 27.800000000000004
- type: precision_at_10
value: 11.95
- type: precision_at_100
value: 2.5260000000000002
- type: precision_at_1000
value: 0.38999999999999996
- type: precision_at_3
value: 20.433
- type: precision_at_5
value: 16.84
- type: recall_at_1
value: 5.668
- type: recall_at_10
value: 24.22
- type: recall_at_100
value: 51.217
- type: recall_at_1000
value: 79.10000000000001
- type: recall_at_3
value: 12.443
- type: recall_at_5
value: 17.068
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.83535239748218
- type: cos_sim_spearman
value: 73.98553311584509
- type: euclidean_pearson
value: 79.57336200069007
- type: euclidean_spearman
value: 73.98553926018461
- type: manhattan_pearson
value: 79.02277757114132
- type: manhattan_spearman
value: 73.52350678760683
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 81.99055838690317
- type: cos_sim_spearman
value: 72.05290668592296
- type: euclidean_pearson
value: 81.7130610313565
- type: euclidean_spearman
value: 72.0529066787229
- type: manhattan_pearson
value: 82.09213883730894
- type: manhattan_spearman
value: 72.5171577483134
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.4685161191763
- type: cos_sim_spearman
value: 84.4847436140129
- type: euclidean_pearson
value: 84.05016757016948
- type: euclidean_spearman
value: 84.48474353891532
- type: manhattan_pearson
value: 83.83064062713048
- type: manhattan_spearman
value: 84.30431591842805
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.00171021092486
- type: cos_sim_spearman
value: 77.91329577609622
- type: euclidean_pearson
value: 81.49758593915315
- type: euclidean_spearman
value: 77.91329577609622
- type: manhattan_pearson
value: 81.23255996803785
- type: manhattan_spearman
value: 77.80027024941825
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.62608607472492
- type: cos_sim_spearman
value: 87.62293916855751
- type: euclidean_pearson
value: 87.04313886714989
- type: euclidean_spearman
value: 87.62293907119869
- type: manhattan_pearson
value: 86.97266321040769
- type: manhattan_spearman
value: 87.61807042381702
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 80.8012095789289
- type: cos_sim_spearman
value: 81.91868918081325
- type: euclidean_pearson
value: 81.2267973811213
- type: euclidean_spearman
value: 81.91868918081325
- type: manhattan_pearson
value: 81.0173457901168
- type: manhattan_spearman
value: 81.79743115887055
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.39698537303725
- type: cos_sim_spearman
value: 88.78668529808967
- type: euclidean_pearson
value: 88.78863351718252
- type: euclidean_spearman
value: 88.78668529808967
- type: manhattan_pearson
value: 88.41678215762478
- type: manhattan_spearman
value: 88.3827998418763
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 68.49024974161408
- type: cos_sim_spearman
value: 69.19917146180619
- type: euclidean_pearson
value: 70.48882819806336
- type: euclidean_spearman
value: 69.19917146180619
- type: manhattan_pearson
value: 70.86827961779932
- type: manhattan_spearman
value: 69.38456983992613
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.41628669863584
- type: cos_sim_spearman
value: 67.87238206703478
- type: euclidean_pearson
value: 67.67834985311778
- type: euclidean_spearman
value: 67.87238206703478
- type: manhattan_pearson
value: 68.23423896742973
- type: manhattan_spearman
value: 68.27069260687092
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 77.31628954400037
- type: cos_sim_spearman
value: 76.83296022489624
- type: euclidean_pearson
value: 76.69680425261211
- type: euclidean_spearman
value: 76.83287843321102
- type: manhattan_pearson
value: 76.65603163327958
- type: manhattan_spearman
value: 76.80803503360451
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.31376078795105
- type: cos_sim_spearman
value: 83.3985199217591
- type: euclidean_pearson
value: 84.06630133719332
- type: euclidean_spearman
value: 83.3985199217591
- type: manhattan_pearson
value: 83.7896654474364
- type: manhattan_spearman
value: 83.1885039212299
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.83161002188668
- type: mrr
value: 96.19253114351153
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 48.132999999999996
- type: map_at_10
value: 58.541
- type: map_at_100
value: 59.34
- type: map_at_1000
value: 59.367999999999995
- type: map_at_3
value: 55.191
- type: map_at_5
value: 57.084
- type: mrr_at_1
value: 51.0
- type: mrr_at_10
value: 59.858
- type: mrr_at_100
value: 60.474000000000004
- type: mrr_at_1000
value: 60.501000000000005
- type: mrr_at_3
value: 57.111000000000004
- type: mrr_at_5
value: 58.694
- type: ndcg_at_1
value: 51.0
- type: ndcg_at_10
value: 63.817
- type: ndcg_at_100
value: 67.229
- type: ndcg_at_1000
value: 67.94
- type: ndcg_at_3
value: 57.896
- type: ndcg_at_5
value: 60.785999999999994
- type: precision_at_1
value: 51.0
- type: precision_at_10
value: 8.933
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 23.111
- type: precision_at_5
value: 15.733
- type: recall_at_1
value: 48.132999999999996
- type: recall_at_10
value: 78.922
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 62.806
- type: recall_at_5
value: 70.078
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.88415841584158
- type: cos_sim_ap
value: 97.72557886493401
- type: cos_sim_f1
value: 94.1294530858003
- type: cos_sim_precision
value: 94.46122860020141
- type: cos_sim_recall
value: 93.8
- type: dot_accuracy
value: 99.88415841584158
- type: dot_ap
value: 97.72557439066108
- type: dot_f1
value: 94.1294530858003
- type: dot_precision
value: 94.46122860020141
- type: dot_recall
value: 93.8
- type: euclidean_accuracy
value: 99.88415841584158
- type: euclidean_ap
value: 97.72557439066108
- type: euclidean_f1
value: 94.1294530858003
- type: euclidean_precision
value: 94.46122860020141
- type: euclidean_recall
value: 93.8
- type: manhattan_accuracy
value: 99.88514851485148
- type: manhattan_ap
value: 97.73324334051959
- type: manhattan_f1
value: 94.1825476429288
- type: manhattan_precision
value: 94.46680080482898
- type: manhattan_recall
value: 93.89999999999999
- type: max_accuracy
value: 99.88514851485148
- type: max_ap
value: 97.73324334051959
- type: max_f1
value: 94.1825476429288
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 72.8168026381278
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 44.30948635130784
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.11268548719803
- type: mrr
value: 55.08079747050335
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.82885852096243
- type: cos_sim_spearman
value: 30.800770979226076
- type: dot_pearson
value: 30.82885608827704
- type: dot_spearman
value: 30.800770979226076
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 66.73038448968596
- type: mrr
value: 77.26510193334836
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.157
- type: map_at_10
value: 79.00399999999999
- type: map_at_100
value: 82.51899999999999
- type: map_at_1000
value: 82.577
- type: map_at_3
value: 55.614
- type: map_at_5
value: 68.292
- type: mrr_at_1
value: 91.167
- type: mrr_at_10
value: 93.391
- type: mrr_at_100
value: 93.467
- type: mrr_at_1000
value: 93.47
- type: mrr_at_3
value: 93.001
- type: mrr_at_5
value: 93.254
- type: ndcg_at_1
value: 91.167
- type: ndcg_at_10
value: 86.155
- type: ndcg_at_100
value: 89.425
- type: ndcg_at_1000
value: 89.983
- type: ndcg_at_3
value: 87.516
- type: ndcg_at_5
value: 86.148
- type: precision_at_1
value: 91.167
- type: precision_at_10
value: 42.697
- type: precision_at_100
value: 5.032
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 76.45100000000001
- type: precision_at_5
value: 64.051
- type: recall_at_1
value: 28.157
- type: recall_at_10
value: 84.974
- type: recall_at_100
value: 95.759
- type: recall_at_1000
value: 98.583
- type: recall_at_3
value: 57.102
- type: recall_at_5
value: 71.383
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 55.031
- type: f1
value: 53.07992810732314
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.27
- type: map_at_100
value: 7.993
- type: map_at_1000
value: 20.934
- type: map_at_3
value: 0.469
- type: map_at_5
value: 0.716
- type: mrr_at_1
value: 76.0
- type: mrr_at_10
value: 84.967
- type: mrr_at_100
value: 84.967
- type: mrr_at_1000
value: 84.967
- type: mrr_at_3
value: 83.667
- type: mrr_at_5
value: 84.967
- type: ndcg_at_1
value: 69.0
- type: ndcg_at_10
value: 59.243
- type: ndcg_at_100
value: 48.784
- type: ndcg_at_1000
value: 46.966
- type: ndcg_at_3
value: 64.14
- type: ndcg_at_5
value: 61.60600000000001
- type: precision_at_1
value: 76.0
- type: precision_at_10
value: 62.6
- type: precision_at_100
value: 50.18
- type: precision_at_1000
value: 21.026
- type: precision_at_3
value: 68.667
- type: precision_at_5
value: 66.0
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.582
- type: recall_at_100
value: 11.988
- type: recall_at_1000
value: 44.994
- type: recall_at_3
value: 0.515
- type: recall_at_5
value: 0.844
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 72.80915114296552
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 70.86374654127641
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.3009999999999997
- type: map_at_10
value: 11.566
- type: map_at_100
value: 17.645
- type: map_at_1000
value: 19.206
- type: map_at_3
value: 6.986000000000001
- type: map_at_5
value: 8.716
- type: mrr_at_1
value: 42.857
- type: mrr_at_10
value: 58.287
- type: mrr_at_100
value: 59.111000000000004
- type: mrr_at_1000
value: 59.111000000000004
- type: mrr_at_3
value: 55.102
- type: mrr_at_5
value: 57.449
- type: ndcg_at_1
value: 39.796
- type: ndcg_at_10
value: 29.059
- type: ndcg_at_100
value: 40.629
- type: ndcg_at_1000
value: 51.446000000000005
- type: ndcg_at_3
value: 36.254999999999995
- type: ndcg_at_5
value: 32.216
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 8.041
- type: precision_at_1000
value: 1.551
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 30.203999999999997
- type: recall_at_1
value: 3.3009999999999997
- type: recall_at_10
value: 17.267
- type: recall_at_100
value: 49.36
- type: recall_at_1000
value: 83.673
- type: recall_at_3
value: 8.049000000000001
- type: recall_at_5
value: 11.379999999999999
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 88.7576
- type: ap
value: 35.52110634325751
- type: f1
value: 74.14476947482417
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 73.52009054895304
- type: f1
value: 73.81407409876577
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 54.35358706465052
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.65619598259522
- type: cos_sim_ap
value: 65.824087818991
- type: cos_sim_f1
value: 61.952620244077536
- type: cos_sim_precision
value: 56.676882661996494
- type: cos_sim_recall
value: 68.311345646438
- type: dot_accuracy
value: 83.65619598259522
- type: dot_ap
value: 65.82406256999921
- type: dot_f1
value: 61.952620244077536
- type: dot_precision
value: 56.676882661996494
- type: dot_recall
value: 68.311345646438
- type: euclidean_accuracy
value: 83.65619598259522
- type: euclidean_ap
value: 65.82409143427542
- type: euclidean_f1
value: 61.952620244077536
- type: euclidean_precision
value: 56.676882661996494
- type: euclidean_recall
value: 68.311345646438
- type: manhattan_accuracy
value: 83.4296954163438
- type: manhattan_ap
value: 65.20662449614932
- type: manhattan_f1
value: 61.352885525070946
- type: manhattan_precision
value: 55.59365623660523
- type: manhattan_recall
value: 68.44327176781002
- type: max_accuracy
value: 83.65619598259522
- type: max_ap
value: 65.82409143427542
- type: max_f1
value: 61.952620244077536
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.90119144642372
- type: cos_sim_ap
value: 84.04753852793387
- type: cos_sim_f1
value: 76.27737226277372
- type: cos_sim_precision
value: 73.86757068667052
- type: cos_sim_recall
value: 78.84970742223591
- type: dot_accuracy
value: 87.90119144642372
- type: dot_ap
value: 84.04753668117337
- type: dot_f1
value: 76.27737226277372
- type: dot_precision
value: 73.86757068667052
- type: dot_recall
value: 78.84970742223591
- type: euclidean_accuracy
value: 87.90119144642372
- type: euclidean_ap
value: 84.04754553468206
- type: euclidean_f1
value: 76.27737226277372
- type: euclidean_precision
value: 73.86757068667052
- type: euclidean_recall
value: 78.84970742223591
- type: manhattan_accuracy
value: 87.87014398261343
- type: manhattan_ap
value: 84.05164646221583
- type: manhattan_f1
value: 76.31392706820128
- type: manhattan_precision
value: 73.91586694566708
- type: manhattan_recall
value: 78.87280566676932
- type: max_accuracy
value: 87.90119144642372
- type: max_ap
value: 84.05164646221583
- type: max_f1
value: 76.31392706820128
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 63.6
- type: map_at_10
value: 72.673
- type: map_at_100
value: 73.05199999999999
- type: map_at_1000
value: 73.057
- type: map_at_3
value: 70.833
- type: map_at_5
value: 72.05799999999999
- type: mrr_at_1
value: 63.6
- type: mrr_at_10
value: 72.673
- type: mrr_at_100
value: 73.05199999999999
- type: mrr_at_1000
value: 73.057
- type: mrr_at_3
value: 70.833
- type: mrr_at_5
value: 72.05799999999999
- type: ndcg_at_1
value: 63.6
- type: ndcg_at_10
value: 76.776
- type: ndcg_at_100
value: 78.52900000000001
- type: ndcg_at_1000
value: 78.696
- type: ndcg_at_3
value: 73.093
- type: ndcg_at_5
value: 75.288
- type: precision_at_1
value: 63.6
- type: precision_at_10
value: 8.95
- type: precision_at_100
value: 0.975
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 26.533
- type: precision_at_5
value: 16.98
- type: recall_at_1
value: 63.6
- type: recall_at_10
value: 89.5
- type: recall_at_100
value: 97.5
- type: recall_at_1000
value: 98.9
- type: recall_at_3
value: 79.60000000000001
- type: recall_at_5
value: 84.89999999999999
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.39999999999999
- type: ap
value: 75.52087544076016
- type: f1
value: 87.7629629899278
---
<p align="center">
<img src="images/gme_logo.png" alt="GME Logo" style="width: 100%; max-width: 450px;">
</p>
<p align="center"><b>GME: General Multimodal Embedding</b></p>
## GME-Qwen2-VL-2B
We are excited to present the `GME-Qwen2-VL` series of unified **multimodal embedding models**,
which are based on the advanced [Qwen2-VL](https://huggingface.co/collections/Qwen/qwen2-vl-66cee7455501d7126940800d) multimodal large language models (MLLMs).
The `GME` models accept three types of input: **text**, **image**, and **image-text pairs**, all of which are encoded into a single universal vector space, yielding strong retrieval performance.
**Key Enhancements of GME Models**:
- **Unified Multimodal Representation**: GME models can process both single-modal and combined-modal inputs, resulting in a unified vector representation. This enables versatile retrieval scenarios (Any2Any Search), supporting tasks such as text retrieval, image retrieval from text, and image-to-image searches.
- **High Performance**: Achieves state-of-the-art (SOTA) results on our Universal Multimodal Retrieval Benchmark (**UMRB**) and demonstrates strong scores on the Massive Text Embedding Benchmark (**MTEB**).
- **Dynamic Image Resolution**: Benefiting from `Qwen2-VL` and our training data, GME models support dynamic-resolution image input.
- **Strong Visual Retrieval Performance**: Enhanced by the Qwen2-VL model series, our models excel in visual document retrieval tasks that require a nuanced understanding of document screenshots.
This capability is particularly beneficial for complex document understanding scenarios,
such as multimodal retrieval-augmented generation (RAG) applications focused on academic papers.
**Developed by**: Tongyi Lab, Alibaba Group
**Paper**: [GME: Improving Universal Multimodal Retrieval by Multimodal LLMs](http://arxiv.org/abs/2412.16855)
## Model List
| Models | Model Size | Max Seq. Length | Dimension | MTEB-en| MTEB-zh | UMRB |
|:-----: | :-----: |:-----: |:-----: |:-----: | :-----: | :-----: |
|[`gme-Qwen2-VL-2B`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct) | 2.21B | 32768 | 1536 | 65.27 | 66.92 | 64.45 |
|[`gme-Qwen2-VL-7B`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct) | 8.29B | 32768 | 3584 | 67.48 | 69.73 | 67.44 |
## Usage
**Use with custom code**
```python
# You can find the script gme_inference.py in https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct/blob/main/gme_inference.py
from gme_inference import GmeQwen2VL
texts = [
"What kind of car is this?",
"The Tesla Cybertruck is a battery electric pickup truck built by Tesla, Inc. since 2023."
]
images = [
'https://en.wikipedia.org/wiki/File:Tesla_Cybertruck_damaged_window.jpg',
'https://en.wikipedia.org/wiki/File:2024_Tesla_Cybertruck_Foundation_Series,_front_left_(Greenwich).jpg',
]
gme = GmeQwen2VL("Alibaba-NLP/gme-Qwen2-VL-2B-Instruct")
# Single-modal embedding
e_text = gme.get_text_embeddings(texts=texts)
e_image = gme.get_image_embeddings(images=images)
print((e_text * e_image).sum(-1))
## tensor([0.2281, 0.6001], dtype=torch.float16)
# How to set embedding instruction
e_query = gme.get_text_embeddings(texts=texts, instruction='Find an image that matches the given text.')
# If is_query=False, we always use the default instruction.
e_corpus = gme.get_image_embeddings(images=images, is_query=False)
print((e_query * e_corpus).sum(-1))
## tensor([0.2433, 0.7051], dtype=torch.float16)
# Fused-modal embedding
e_fused = gme.get_fused_embeddings(texts=texts, images=images)
print((e_fused[0] * e_fused[1]).sum())
## tensor(0.6108, dtype=torch.float16)
```
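The embeddings above can be dropped into a simple Any2Any retrieval loop. The sketch below is a minimal example that relies only on the `GmeQwen2VL` methods shown above; the query string and top-k value are illustrative placeholders, and the embeddings are explicitly L2-normalized so that the dot product equals cosine similarity.
```python
# Minimal text-to-image retrieval sketch built on the GmeQwen2VL API shown above.
# The query string and top_k value are illustrative placeholders.
import torch.nn.functional as F

from gme_inference import GmeQwen2VL

gme = GmeQwen2VL("Alibaba-NLP/gme-Qwen2-VL-2B-Instruct")

corpus_images = [
    'https://en.wikipedia.org/wiki/File:Tesla_Cybertruck_damaged_window.jpg',
    'https://en.wikipedia.org/wiki/File:2024_Tesla_Cybertruck_Foundation_Series,_front_left_(Greenwich).jpg',
]
query = "a damaged electric pickup truck"

# Encode the corpus once (documents) and the query with a retrieval instruction.
doc_emb = gme.get_image_embeddings(images=corpus_images, is_query=False)
query_emb = gme.get_text_embeddings(
    texts=[query], instruction='Find an image that matches the given text.'
)

# Cast to fp32 and L2-normalize so the dot product is cosine similarity.
doc_emb = F.normalize(doc_emb.float(), dim=-1)
query_emb = F.normalize(query_emb.float(), dim=-1)
scores = query_emb @ doc_emb.T  # shape: (num_queries, num_docs)

top_k = min(2, len(corpus_images))
values, indices = scores[0].topk(top_k)
for rank, (score, idx) in enumerate(zip(values.tolist(), indices.tolist()), start=1):
    print(f"{rank}. {corpus_images[idx]} (score={score:.4f})")
```
For larger corpora you would typically cache `doc_emb` (or store it in a vector index) and keep only the query encoding plus the matrix multiplication in the serving path.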
## Evaluation
We validated the performance of GME models on our Universal Multimodal Retrieval Benchmark (**UMRB**), among other benchmarks.
| | | Single-modal | | Cross-modal | | | Fused-modal | | | | Avg. |
|--------------------|------|:------------:|:---------:|:-----------:|:-----------:|:---------:|:-----------:|:----------:|:----------:|:-----------:|:----------:|
| | | T→T (16) | I→I (1) | T→I (4) | T→VD (10) | I→T (4) | T→IT (2) | IT→T (5) | IT→I (2) | IT→IT (3) | (47) |
| VISTA | 0.2B | 55.15 | **31.98** | 32.88 | 10.12 | 31.23 | 45.81 | 53.32 | 8.97 | 26.26 | 37.32 |
| CLIP-SF | 0.4B | 39.75 | 31.42 | 59.05 | 24.09 | 62.95 | 66.41 | 53.32 | 34.9 | 55.65 | 43.66 |
| One-Peace | 4B | 43.54 | 31.27 | 61.38 | 42.9 | 65.59 | 42.72 | 28.29 | 6.73 | 23.41 | 42.01 |
| DSE | 4.2B | 48.94 | 27.92 | 40.75 | 78.21 | 52.54 | 49.62 | 35.44 | 8.36 | 40.18 | 50.04 |
| E5-V | 8.4B | 52.41 | 27.36 | 46.56 | 41.22 | 47.95 | 54.13 | 32.9 | 23.17 | 7.23 | 42.52 |
| **[GME-Qwen2-VL-2B](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct)** | 2.2B | 55.93 | 29.86 | 57.36 | 87.84 | 61.93 | 76.47 | 64.58 | 37.02 | 66.47 | 64.45 |
| **[GME-Qwen2-VL-7B](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct)** | 8.3B | **58.19** | 31.89 | **61.35** | **89.92** | **65.83** | **80.94** | **66.18** | **42.56** | **73.62** | **67.44** |
The English tab of the [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) shows the text-embedding performance of our models.
**More detailed experimental results can be found in the [paper](http://arxiv.org/abs/2412.16855)**.
## Community Support
### Fine-tuning
GME models can be fine-tuned with SWIFT:
```shell
pip install ms-swift -U
```
```shell
# MAX_PIXELS settings to reduce memory usage
# check: https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
nproc_per_node=8
MAX_PIXELS=1003520 \
USE_HF=1 \
NPROC_PER_NODE=$nproc_per_node \
swift sft \
--model Alibaba-NLP/gme-Qwen2-VL-2B-Instruct \
--train_type lora \
--dataset 'HuggingFaceM4/TextCaps:emb' \
--torch_dtype bfloat16 \
--num_train_epochs 1 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 2 \
--gradient_accumulation_steps $(expr 64 / $nproc_per_node) \
--eval_steps 100 \
--save_steps 100 \
--eval_strategy steps \
--save_total_limit 5 \
--logging_steps 5 \
--output_dir output \
--lazy_tokenize true \
--warmup_ratio 0.05 \
--learning_rate 5e-6 \
--deepspeed zero3 \
--dataloader_num_workers 4 \
--task_type embedding \
--loss_type infonce \
--dataloader_drop_last true
```
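After training finishes, the LoRA adapter is typically merged into a standalone checkpoint (see the ms-swift documentation linked above for the export/merge step). The snippet below is a hypothetical sketch of re-using such a merged checkpoint with the same inference wrapper; the local directory and image path are placeholders, and it assumes that `GmeQwen2VL` accepts a local path in place of a Hub model ID and that `get_image_embeddings` also accepts local files (the usage example above passes URLs).
```python
# Hypothetical: load a fine-tuned (LoRA-merged) checkpoint from a local directory.
# Both paths below are placeholders; replace them with your own outputs.
from gme_inference import GmeQwen2VL

gme_ft = GmeQwen2VL("output/gme-qwen2-vl-2b-textcaps-merged")  # assumed local checkpoint dir

caption_emb = gme_ft.get_text_embeddings(texts=["a red bus parked near a station"])
image_emb = gme_ft.get_image_embeddings(images=["path/to/local_image.jpg"], is_query=False)
print((caption_emb * image_emb).sum(-1))
```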
## Limitations
- **Single Image Input**: In `Qwen2-VL`, an image can be converted into a very large number of visual tokens. We limit the number of visual tokens to 1024 to maintain good training efficiency.
Due to the lack of relevant data, our models and evaluations are restricted to a single image per input.
- **English-only Training**: Our models are trained on English data only. Although the underlying `Qwen2-VL` models are multilingual, multilingual multimodal embedding performance is not guaranteed.
We plan to extend to multi-image input, image-text interleaved data, and multilingual data in future versions.
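If your inputs are very high-resolution (e.g., full-page document screenshots), one simple mitigation is to downscale images before embedding so that they stay within the visual-token budget. The sketch below uses Pillow; the 28×28-pixels-per-token figure follows Qwen2-VL's patch-and-merge scheme but should be treated as an approximation, and passing local file paths to `get_image_embeddings` is an assumption (the usage example above passes URLs).
```python
# Hedged sketch: downscale an image so its pixel count stays within an
# approximate 1024-visual-token budget before embedding it.
from PIL import Image

MAX_PIXELS = 1024 * 28 * 28  # ~1024 visual tokens, assuming 28x28 pixels per token


def downscale_for_embedding(path: str, max_pixels: int = MAX_PIXELS) -> str:
    img = Image.open(path)
    w, h = img.size
    if w * h > max_pixels:
        scale = (max_pixels / (w * h)) ** 0.5
        img = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    out_path = path.rsplit(".", 1)[0] + "_small.png"
    img.save(out_path)
    return out_path

# The resized file can then be passed wherever an image is expected, e.g.
# gme.get_image_embeddings(images=[downscale_for_embedding("doc_page.png")], is_query=False)
```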
## Redistribution and Use
We encourage and value diverse applications of GME models and continuous enhancements to the models themselves.
- If you distribute or make GME models (or any derivative works) available, or if you create a product or service (including another AI model) that incorporates them, you must prominently display `Built with GME` on your website, user interface, blog post, About page, or product documentation.
- If you utilize GME models or their outputs to develop, train, fine-tune, or improve an AI model that is distributed or made available, you must prefix the name of any such AI model with `GME`.
## Cloud API Services
In addition to the open-source [GME](https://huggingface.co/collections/Alibaba-NLP/gme-models-67667e092da3491f630964d6) series, GME models are also available as commercial API services on Alibaba Cloud.
- [MultiModal Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/multimodal-embedding-api-reference?spm=a2c4g.11186623.0.0.321c1d1cqmoJ5C): The `multimodal-embedding-v1` model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Hiring
We have open positions for Research Interns and Full-Time Researchers to join our team at Tongyi Lab.
We are seeking passionate individuals with expertise in representation learning, LLM-driven information retrieval, Retrieval-Augmented Generation (RAG), and agent-based systems.
Our team is located in the vibrant cities of Beijing and Hangzhou, offering a collaborative and dynamic work environment where you can contribute to cutting-edge advancements in artificial intelligence and machine learning.
If you are driven by curiosity and eager to make a meaningful impact through your work, we would love to hear from you. Please submit your resume along with a brief introduction to <a href="mailto:[email protected]">[email protected]</a>.
## Citation
If you find our paper or models helpful, please consider citing:
```
@misc{zhang2024gme,
title={GME: Improving Universal Multimodal Retrieval by Multimodal LLMs},
author={Zhang, Xin and Zhang, Yanzhao and Xie, Wen and Li, Mingxin and Dai, Ziqi and Long, Dingkun and Xie, Pengjun and Zhang, Meishan and Li, Wenjie and Zhang, Min},
year={2024},
eprint={2412.16855},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={http://arxiv.org/abs/2412.16855},
}
```